CloudFormation: Why does an RDS instance create a default security group when I am already creating one while initiating RDS?

This is my RDS instance. I am creating a security group which gives access to my Workbench and my backend code. RDS creates a default security group, which overlaps with the security group I create and thus makes the database inaccessible. How can I stop RDS from creating the default security group?
Here is my RDS template:
"Resources": {
"epmoliteDB": {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"DBName": {"Ref": "DBname"},
"DBSecurityGroups": [{"Ref": "DBSecurityGroup"}],
"AllocatedStorage": "5",
"DBInstanceClass": "db.t2.micro",
"Engine": "MySQL",
"MasterUsername": {"Ref": "DBuser"},
"MasterUserPassword": {"Ref": "DBpass"},
"DBParameterGroupName": {"Ref": "epmoliteDBParameterGroup"}
}
},
"DBSecurityGroup": {
"Type": "AWS::RDS::DBSecurityGroup",
"Properties": {
"DBSecurityGroupIngress": {
"EC2SecurityGroupName": {"Ref": "WebServerSecurityGroup"}
},
"GroupDescription": "Frontend Access"
}
},
"WebServerSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription" : "Enable MYSQL access via port 3306",
"SecurityGroupIngress": [{
"IpProtocol": "tcp","FromPort": "3306","ToPort": "3306","CidrIp": "0.0.0.0/0"
}]
}
},
"epmoliteDBParameterGroup": {
"Type": "AWS::RDS::DBParameterGroup",
"Properties" : {
"Description" : "Parameter group to avoid schema import errors",
"Family" : "MySQL5.7",
"Parameters" : {
"log_bin_trust_function_creators": "1"
}
}
}
}

I can't say exactly why a default security group is created and overlaps the one you specified. What I can tell you, though, is that you should rely on VPCSecurityGroups, which replaces the old DBSecurityGroups property that was only relevant in EC2-Classic (before the VPC era). Perhaps this will solve the issue.
There's an article in the docs to learn more about this: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Compare.
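If it helps, here is a rough sketch of what that could look like for your template (a sketch only: it assumes the instance runs in a VPC whose id is passed in as a VpcId parameter, and the DBEC2SecurityGroup logical name is just an example). The AWS::RDS::DBSecurityGroup resource is dropped and the DB instance references a plain EC2 security group through VPCSecurityGroups:
"DBEC2SecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "GroupDescription": "Database access from the web servers",
    "VpcId": {"Ref": "VpcId"},
    "SecurityGroupIngress": [{
      "IpProtocol": "tcp",
      "FromPort": "3306",
      "ToPort": "3306",
      "SourceSecurityGroupId": {"Fn::GetAtt": ["WebServerSecurityGroup", "GroupId"]}
    }]
  }
},
"epmoliteDB": {
  "Type": "AWS::RDS::DBInstance",
  "Properties": {
    "DBName": {"Ref": "DBname"},
    "VPCSecurityGroups": [{"Fn::GetAtt": ["DBEC2SecurityGroup", "GroupId"]}],
    "AllocatedStorage": "5",
    "DBInstanceClass": "db.t2.micro",
    "Engine": "MySQL",
    "MasterUsername": {"Ref": "DBuser"},
    "MasterUserPassword": {"Ref": "DBpass"},
    "DBParameterGroupName": {"Ref": "epmoliteDBParameterGroup"}
  }
}
Depending on your VPC setup you may also need a DBSubnetGroupName on the DB instance; if you are launching into the default VPC it can usually be omitted.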

Related

Launch an EC2 instance with CloudFormation

I am trying to launch an EC2 instance using CloudFormation. I created this JSON template but I get the error Template format error: At least one Resources member must be defined.
{
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": "ami-08ddb3f251a88cf33",
    "InstanceType": "t2.micro ",
    "KeyName": "Stagingkey",
    "LaunchTemplate": {
      "LaunchTemplateId": "jen1",
      "LaunchTemplateName": "Launchinstance",
      "Version": "V1"
    },
    "SecurityGroupIds": [ "sg-055f49a32efd4238b" ],
    "SecurityGroups": [ "jenkins_group" ],
  }
}
What am I doing wrong?
Is there any other template for the ap-south-1 region which I could use? Any help would be appreciated.
The error says it all: At least one Resources member must be defined.
The major sections of a template are:
Parameters
Mappings
Resources
Outputs
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "My Stack",
  "Resources": {
    "MyInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-08ddb3f251a88cf33",
        "InstanceType": "t2.micro ",
        "KeyName": "Stagingkey",
        "LaunchTemplate": {
          "LaunchTemplateId": "jen1",
          "LaunchTemplateName": "Launchinstance",
          "Version": "V1"
        },
        "SecurityGroupIds": [
          "sg-055f49a32efd4238b"
        ],
        "SecurityGroups": [
          "jenkins_group"
        ]
      }
    }
  }
}
You'll need to test it. For example, it is unlikely that you will define both SecurityGroupIds and SecurityGroups.
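As a rough sketch of what a trimmed-down resource might look like (keeping only SecurityGroupIds since the instance appears to be VPC-based, removing the stray trailing space in the instance type, and leaving out the LaunchTemplate block, whose LaunchTemplateId would normally be an lt-... id rather than a name):
"Resources": {
  "MyInstance": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
      "ImageId": "ami-08ddb3f251a88cf33",
      "InstanceType": "t2.micro",
      "KeyName": "Stagingkey",
      "SecurityGroupIds": ["sg-055f49a32efd4238b"]
    }
  }
}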
All the properties you have entered are properties of an EC2 resource, which you need to declare. You have no Resources block or logical name for your resource, like so:
"Resources": {
"MyTomcatName": {
"Type": "AWS::EC2::Instance",
"Properties": {
[...]

Azure REST API does not return encryption settings for Virtual Machine

I have a 16.04-LTS Ubuntu virtual machine in my Azure account and I am trying Azure Disk Encryption for this virtual machine, making use of this Azure CLI sample script. After running the encryption script, the Azure portal shows that its OS disk is encrypted: there is "Enabled" under the Encryption header.
However, the Azure REST API (api link) for getting information about the virtual machine does not return encryptionSettings under properties.storageProfile.osDisk. I tried both Model View and Model View and Instance View for api-version 2017-03-30 as well as 2017-12-01. Here is the partial response from the API:
{
"name": "ubuntu",
"properties": {
"osProfile": {},
"networkProfile": {},
"storageProfile": {
"imageReference": {
"sku": "16.04-LTS",
"publisher": "Canonical",
"version": "latest",
"offer": "UbuntuServer"
},
"osDisk": {
"name": "ubuntu-OsDisk",
"diskSizeGB": 30,
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id": "..."
},
"caching": "ReadWrite",
"createOption": "FromImage",
"osType": "Linux"
},
"dataDisks": []
},
"diagnosticsProfile": {},
"vmId": "",
"hardwareProfile": {
"vmSize": "Standard_B1s"
},
"provisioningState": "Succeeded"
},
"location": "eastus",
"type": "Microsoft.Compute/virtualMachines",
"id": ""
}
But for my other encrypted Windows virtual machine, I get the correct response, which contains encryptionSettings in properties.storageProfile.osDisk:
{
"name": "win1",
"properties": {
"osProfile": {},
"networkProfile": {},
"storageProfile": {
"imageReference": {
"sku": "2016-Datacenter-smalldisk",
"publisher": "MicrosoftWindowsServer",
"version": "latest",
"offer": "WindowsServer"
},
"osDisk": {
"name": "win1_OsDisk_1",
"diskSizeGB": 31,
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id": "..."
},
"encryptionSettings": {
"diskEncryptionKey": {
"secretUrl": "...",
"sourceVault": {
"id": "..."
}
},
"keyEncryptionKey": {
"keyUrl": "...",
"sourceVault": {
"id": "..."
}
},
"enabled": true
},
"caching": "ReadWrite",
"createOption": "FromImage",
"osType": "Windows"
},
"dataDisks": []
},
"diagnosticsProfile": {},
"vmId": "...",
"hardwareProfile": {
"vmSize": "Standard_B1s"
},
"provisioningState": "Succeeded"
},
"location": "eastus",
"type": "Microsoft.Compute/virtualMachines",
"id": "..."
}
Why is the Virtual Machine Get API not returning the encryptionSettings for some VMs? Any help would be greatly appreciated.
I created a VM using the following command:
az vm create \
--resource-group shuivm \
--name shuivm \
--image Canonical:UbuntuServer:16.04-LTS:latest \
--admin-username azureuser \
--generate-ssh-keys
When I used the following API, I could get the encryption settings:
https://management.azure.com/subscriptions/**********/resourceGroups/shuivm/providers/Microsoft.Compute/virtualMachines/shuivm?api-version=2017-03-30
Note: only once the OS has been encrypted successfully could I use the API to get the encryption settings.
This is because there are two types of at-rest disk encryption for Azure VMs and they are not reported in the same part of the Azure Management API:
Server-Side Encryption: you can see this in the encryptionSettings section of the VM/compute API when you get the VM details. It will show whether you are encrypting with a customer-managed key or a platform-managed key.
ADE: Azure Disk Encryption is actually a VM extension and so you can find it in the VM Extension API instead.
see: https://learn.microsoft.com/en-us/rest/api/compute/virtualmachineextensions/list
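For reference, the list endpoint documented at that link follows this pattern (substitute your own ids and a supported api-version):
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions?api-version=<api-version>
The ADE status then appears on the returned extension resource (for Linux VMs this is typically the AzureDiskEncryptionForLinux extension) rather than under storageProfile.osDisk.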

Update Stack w. "Replacement: Conditional"; will it replace?

I just updated some of our CF templates and was going to update the stack they refer to. I've added some default values, added a CloudWatch alarm, and I am also going to downgrade the instance from m4.xlarge to m4.large.
I've already downgraded the instance in the EC2 GUI and it went fine. I then reverted it to its default state as per the original template, i.e. m4.xlarge.
However, when I modify the default value for the instance type in the template, the change is not reflected when I upload the modified template to CloudFormation.
Meaning the default value is still m4.xlarge and I have to use the drop-down menu to select m4.large as specified in my template.
If I don't change the instance type I get
"Replacement: False", but if I update the instance type I get "Replacement: Conditional".
If I read more under "Changeset Details" and then "Details" I see:
[
{
"resourceChange": {
"logicalResourceId": "CPUAlarm",
"action": "Add",
"physicalResourceId": null,
"resourceType": "AWS::CloudWatch::Alarm",
"replacement": null,
"details": [],
"scope": []
},
"type": "Resource"
},
{
"resourceChange": {
"logicalResourceId": "myInstanceName",
"action": "Modify",
"physicalResourceId": "<masked>",
"resourceType": "AWS::EC2::Instance",
"replacement": "Conditional",
"details": [
{
"target": {
"name": null,
"requiresRecreation": "Never",
"attribute": "Tags"
},
"causingEntity": null,
"evaluation": "Dynamic",
"changeSource": "DirectModification"
},
{
"target": {
"name": null,
"requiresRecreation": "Never",
"attribute": "Tags"
},
"causingEntity": "Project",
"evaluation": "Static",
"changeSource": "ParameterReference"
},
{
"target": {
"name": null,
"requiresRecreation": "Never",
"attribute": "Tags"
},
"causingEntity": null,
"evaluation": "Static",
"changeSource": null
},
{
"target": {
"name": "InstanceType",
"requiresRecreation": "Conditionally",
"attribute": "Properties"
},
"causingEntity": "InstanceType",
"evaluation": "Static",
"changeSource": "ParameterReference"
},
{
"target": {
"name": "InstanceType",
"requiresRecreation": "Conditionally",
"attribute": "Properties"
},
"causingEntity": null,
"evaluation": "Dynamic",
"changeSource": "DirectModification"
}
],
"scope": [
"Properties",
"Tags"
]
},
"type": "Resource"
},
{
So what I can see is that:
"name": "InstanceType", "requiresRecreation": "Conditionally" is the only entry with a more restrictive value, and therefore the resource gets "Replacement: Conditional".
As per AWS:
"In some cases, AWS CloudFormation can determine a value only after you execute a change set. AWS CloudFormation labels those changes as Dynamic evaluations. For example, if you reference an updated resource that is conditionally replaced, AWS CloudFormation can't determine whether the reference to the updated resource will change."
Source:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets-samples.html#using-cfn-updating-stacks-changesets-samples-directly-editing-a-template
AFAIK "Replacement: Conditional" 'might' replace the resource i.e. creating a new physicalResourceID which in turn forces me to change associated SGs etc. but it might also not do it, correct?
Grateful for any assistance!
It's generally not recommended to modify resources created via CloudFormation outside of CloudFormation. For your last question, Replacement: Conditional may or may not necessitate replacement of a resource, based on what exactly you're trying to do. It's always helpful to look at the AWS CloudFormation docs whenever you have doubts, though; e.g. in your specific scenario of editing the instance type of an EC2 instance, here's what the docs state:
Update requires: Some interruptions for Amazon EBS-backed instances
Update requires: Replacement for instance store-backed instances
Ref: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-instancetype

Exporting an AWS Postgres RDS Table to AWS S3

I wanted to use AWS Data Pipeline to pipe data from a Postgres RDS to AWS S3. Does anybody know how this is done?
More precisely, I wanted to export a Postgres table to AWS S3 using Data Pipeline. The reason I am using Data Pipeline is that I want to automate this process, and this export is going to run once every week.
Any other suggestions will also work.
There is a sample on GitHub.
https://github.com/awslabs/data-pipeline-samples/tree/master/samples/RDStoS3
Here is the code:
https://github.com/awslabs/data-pipeline-samples/blob/master/samples/RDStoS3/RDStoS3Pipeline.json
You can define a copy-activity in the Data Pipeline interface to extract data from a Postgres RDS instance into S3.
Create a data node of the type SqlDataNode. Specify table name and select query.
Set up the database connection by specifying the RDS instance ID (the instance ID is in your URL, e.g. your-instance-id.xxxxx.eu-west-1.rds.amazonaws.com) along with the username, password and database name.
Create a data node of the type S3DataNode.
Create a Copy activity and set the SqlDataNode as input and the S3DataNode as output.
Another option is to use an external tool like Alooma. Alooma can replicate tables from a PostgreSQL database hosted on Amazon RDS to Amazon S3 (https://www.alooma.com/integrations/postgresql/s3). The process can be automated and you can run it once a week.
I built a pipeline from scratch using the MySQL sample and the documentation as a reference.
You need to have the roles in place: DataPipelineDefaultResourceRole and DataPipelineDefaultRole.
I haven't loaded the parameters, so you need to go into the Architect and fill in your credentials and folders.
Hope it helps.
{
"objects": [
{
"failureAndRerunMode": "CASCADE",
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"pipelineLogUri": "#{myS3LogsPath}",
"scheduleType": "ONDEMAND",
"name": "Default",
"id": "Default"
},
{
"database": {
"ref": "DatabaseId_WC2j5"
},
"name": "DefaultSqlDataNode1",
"id": "SqlDataNodeId_VevnE",
"type": "SqlDataNode",
"selectQuery": "#{myRDSSelectQuery}",
"table": "#{myRDSTable}"
},
{
"*password": "#{*myRDSPassword}",
"name": "RDS_database",
"id": "DatabaseId_WC2j5",
"type": "RdsDatabase",
"rdsInstanceId": "#{myRDSId}",
"username": "#{myRDSUsername}"
},
{
"output": {
"ref": "S3DataNodeId_iYhHx"
},
"input": {
"ref": "SqlDataNodeId_VevnE"
},
"name": "DefaultCopyActivity1",
"runsOn": {
"ref": "ResourceId_G9GWz"
},
"id": "CopyActivityId_CapKO",
"type": "CopyActivity"
},
{
"dependsOn": {
"ref": "CopyActivityId_CapKO"
},
"filePath": "#{myS3Container}#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
"name": "DefaultS3DataNode1",
"id": "S3DataNodeId_iYhHx",
"type": "S3DataNode"
},
{
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"instanceType": "m1.medium",
"name": "DefaultResource1",
"id": "ResourceId_G9GWz",
"type": "Ec2Resource",
"terminateAfter": "30 Minutes"
}
],
"parameters": [
]
}
You can now do this with the aws_s3.query_export_to_s3 function within Postgres itself: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html
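A minimal sketch of that approach (table, bucket, path and region are placeholders, and the aws_s3 extension has to be installed in the database first):
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;

SELECT * FROM aws_s3.query_export_to_s3(
    'SELECT * FROM my_table',
    aws_commons.create_s3_uri('my-bucket', 'exports/my_table.csv', 'us-east-1'),
    options := 'format csv'
);
The weekly scheduling would still have to come from somewhere else, e.g. a cron job or a scheduled Lambda that runs the query.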

Grant access to RDS layer using CloudFormation for app layer

I have an RDS database that I bring up using CloudFormation. Now I have a CloudFormation document that brings up my app server tier. How can I grant my app servers access to the RDS instance?
If the RDS instance was created by my CloudFormation document, I know I could do this:
"DBSecurityGroup": {
"Type": "AWS::RDS::DBSecurityGroup",
"Properties": {
"EC2VpcId" : { "Ref" : "VpcId" },
"DBSecurityGroupIngress": { "EC2SecurityGroupId": { "Fn::GetAtt": [ "AppServerSecurityGroup", "GroupId" ]} },
"GroupDescription" : "Frontend Access"
}
}
But the DBSecurityGroup will already exist by the time I run my app CloudFormation. How can I update it?
Update: Following what huelbois pointed out below, I understood that I could just create an AWS::EC2::SecurityGroupIngress in my app CloudFormation. Since I am using a VPC and the code huelbois posted is for EC2-Classic, I can confirm that the following works:
In the RDS CloudFormation template:
"DbVpcSecurityGroup" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Enable JDBC access on the configured port",
"VpcId" : { "Ref" : "VpcId" },
"SecurityGroupIngress" : [ ]
}
}
And in the app CloudFormation template:
"specialRDSRule" : {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties" : {
"IpProtocol": "tcp",
"FromPort": 5432,
"ToPort": 5432,
"GroupId": {"Ref": "DbSecurityGroupId"},
"SourceSecurityGroupId": {"Ref": "InstanceSecurityGroup"}
}
}
where DbSecurityGroupId is the id of the group set up above (something like sg-27324c43) and is a parameter to the app CloudFormation document.
When you want to use already existing resources in a CloudFormation template, you can use the previously created ids, instead of Ref or GetAtt.
In your example, you can use:
{ "EC2SecurityGroupId": "sg-xxxNNN" }
where "sg-xxxNNN" is the id of your DB SecurityGroup (not sure of the DB SecurityGroup prefix, since we don't use EC2-classic but VPC).
I would recommend using a parameter for your SecurityGroup in your template.
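For instance (a sketch, assuming VPC security groups and an arbitrary parameter name), the consuming template could declare:
"Parameters": {
  "DbSecurityGroupId": {
    "Type": "AWS::EC2::SecurityGroup::Id",
    "Description": "Id of the DB security group created by the RDS stack"
  }
}
and you pass the sg-xxxNNN id in when you create or update the stack.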
*** Update ***
For your specific setup, I would use a "DBSecurityGroupIngress" resource to add a new source security group to your RDS instance.
In your first stack (RDS), you create an empty DBSecurityGroup like this:
"DBSecurityGroup": {
"Type": "AWS::RDS::DBSecurityGroup",
"Properties": {
"EC2VpcId" : { "Ref" : "VpcId" },
"DBSecurityGroupIngress": [],
"GroupDescription" : "Frontend Access"
}
}
This DBSecurityGroup is referred to by the DBInstance. (I guess you have specific requirements for using DBSecurityGroup instead of VPCSecurityGroups.)
In your App stack, you create a DBSecurityGroupIngress resource, which is a child of the DBSecurityGroup you created in the first stack:
"specialRDSRule" : {
"Type":"AWS::RDS::DBSecurityGroupIngress",
"Properties" : {
"DBSecurityGroupName": "<the arn of the DBSecurityGroup>",
"CIDRIP": String,
"EC2SecurityGroupId": String,
"EC2SecurityGroupName": String,
"EC2SecurityGroupOwnerId": String
}
}
You need the ARN of the DBSecurityGroup, which looks like arn:aws:rds:<region>:<account-id>:secgrp:<security-group-name>. The other parameters come from your App stack; I'm not sure if you need everything (I don't do EC2-Classic security groups, only VPC).
Reference : http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-security-group-ingress.html
We use the same mechanism with VPC SecurityGroups, with Ingress & Egress rules, so we can have two SGs reference each other.
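As an illustration of that pattern (a sketch reusing the logical names from the snippets above), the egress side can be split out the same way, so the two groups can reference each other without creating a circular dependency:
"appToDbEgress": {
  "Type": "AWS::EC2::SecurityGroupEgress",
  "Properties": {
    "IpProtocol": "tcp",
    "FromPort": 5432,
    "ToPort": 5432,
    "GroupId": {"Ref": "InstanceSecurityGroup"},
    "DestinationSecurityGroupId": {"Ref": "DbSecurityGroupId"}
  }
}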