I first restored an Aurora RDS cluster from a cluster snapshot using a CloudFormation template. Then I removed the snapshot identifier, updated the password, and performed a stack update, keeping everything else in the CFT unchanged. But the stack update always prints the
Requested update requires the creation of a new physical resource;
hence creating one.
message and starts creating a new cluster. Here is my CFT for the cluster.
"DatabaseCluster": {
"Type": "AWS::RDS::DBCluster",
"DeletionPolicy": "Snapshot",
"Properties": {
"BackupRetentionPeriod": {
"Ref": "BackupRetentionPeriod"
},
"Engine": "aurora-postgresql",
"EngineVersion": {
"Ref": "EngineVersion"
},
"Port": {
"Ref": "Port"
},
"MasterUsername": {
"Fn::If" : [
"isUseDBSnapshot",
{"Ref" : "AWS::NoValue"},
{"Ref" : "MasterUsername"}
]
},
"MasterUserPassword": {
"Fn::If" : [
"isUseDBSnapshot",
{"Ref" : "AWS::NoValue"},
{"Ref" : "MasterPassword"}
]
},
"DatabaseName": {
"Fn::If" : [
"isUseDBSnapshot",
{"Ref" : "AWS::NoValue"},
{"Ref" : "DBName"}
]
},
"SnapshotIdentifier" : {
"Fn::If" : [
"isUseDBSnapshot",
{"Ref" : "SnapshotIdentifier"},
{"Ref" : "AWS::NoValue"}
]
},
"PreferredBackupWindow": "01:00-02:00",
"PreferredMaintenanceWindow": "mon:03:00-mon:04:00",
"DBSubnetGroupName": {"Ref":"rdsDbSubnetGroup"},
"StorageEncrypted":{"Ref" : "StorageEncrypted"},
"DBClusterParameterGroupName": {"Ref" : "RDSDBClusterParameterGroup"},
"VpcSecurityGroupIds": [{"Ref" : "CommonSGId"}]
}
}
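The isUseDBSnapshot condition referenced above is not shown here; it just checks whether a snapshot identifier was supplied, roughly like this (a sketch, assuming SnapshotIdentifier is a string parameter that defaults to an empty string):
"Conditions": {
    "isUseDBSnapshot": {
        "Fn::Not": [
            { "Fn::Equals": [ { "Ref": "SnapshotIdentifier" }, "" ] }
        ]
    }
}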
According to the AWS RDS CloudFormation documentation, updating MasterUserPassword does not require cluster replacement.
Is there anything wrong with my CFT, or is this an issue with AWS?
If you just wish to update the password of the DB cluster, you shouldn't remove the snapshot identifier. I understand that you might be worried about losing data if the snapshot is restored again.
However, that is not the case with CloudFormation. CloudFormation checks precisely what changes you have made and performs the relevant operation. If you are changing just the password, it will not tamper with your data, whatever state it is in.
However, removing the snapshot identifier means you want to change the DB and remove the snapshot from it, so CloudFormation will replace your DB cluster.
Check the link below for more details on what happens when each parameter changes:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html#cfn-rds-dbcluster-snapshotidentifier
It clearly specifies that any change to the SnapshotIdentifier results in replacement.
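If you only want to rotate the password, one way to keep the SnapshotIdentifier from looking removed is to pass it unchanged on the update, for example with a parameters file given to aws cloudformation update-stack --use-previous-template --parameters file://update-params.json. This is only a sketch; the parameter names are taken from the template above and the password value is a placeholder:
[
    { "ParameterKey": "SnapshotIdentifier", "UsePreviousValue": true },
    { "ParameterKey": "MasterUsername", "UsePreviousValue": true },
    { "ParameterKey": "MasterPassword", "ParameterValue": "NewStrongPassword1" }
]
Note that with the Fn::If conditions in the template above, MasterUserPassword is only applied when no snapshot identifier is supplied, so the conditions may also need adjusting before a new password value actually takes effect.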
This is my RDS instance. I am creating a security group which gives access to my Workbench and my backend code. RDS creates a default security group, which overlaps the security group I create and thus makes the instance inaccessible. How can I stop RDS from creating the default security group?
Here is my RDS template
"Resources": {
"epmoliteDB": {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"DBName": {"Ref": "DBname"},
"DBSecurityGroups": [{"Ref": "DBSecurityGroup"}],
"AllocatedStorage": "5",
"DBInstanceClass": "db.t2.micro",
"Engine": "MySQL",
"MasterUsername": {"Ref": "DBuser"},
"MasterUserPassword": {"Ref": "DBpass"},
"DBParameterGroupName": {"Ref": "epmoliteDBParameterGroup"}
}
},
"DBSecurityGroup": {
"Type": "AWS::RDS::DBSecurityGroup",
"Properties": {
"DBSecurityGroupIngress": {
"EC2SecurityGroupName": {"Ref": "WebServerSecurityGroup"}
},
"GroupDescription": "Frontend Access"
}
},
"WebServerSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription" : "Enable MYSQL access via port 3306",
"SecurityGroupIngress": [{
"IpProtocol": "tcp","FromPort": "3306","ToPort": "3306","CidrIp": "0.0.0.0/0"
}]
}
},
"epmoliteDBParameterGroup": {
"Type": "AWS::RDS::DBParameterGroup",
"Properties" : {
"Description" : "Parameter group to avoid schema import errors",
"Family" : "MySQL5.7",
"Parameters" : {
"log_bin_trust_function_creators": "1"
}
}
}
}
I can't exactly explain why a default security group is created and overlaps with the one you specified. What I can tell you, though, is that you should really rely on VPCSecurityGroups, which replaces the old DBSecurityGroups mechanism that was relevant in "EC2-Classic" (before the VPC era). Perhaps this will solve the issue.
There's an article in the doc to learn more about this: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Compare.
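A minimal sketch of what that could look like for the template in the question, assuming a VpcId parameter and that WebServerSecurityGroup is created in the same VPC (a DBSubnetGroupName would normally also be needed for an instance in a VPC, omitted here; names not reused from the question are hypothetical):
"DBVpcSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {
        "GroupDescription": "Allow MySQL access from the web servers only",
        "VpcId": { "Ref": "VpcId" },
        "SecurityGroupIngress": [{
            "IpProtocol": "tcp",
            "FromPort": "3306",
            "ToPort": "3306",
            "SourceSecurityGroupId": { "Fn::GetAtt": [ "WebServerSecurityGroup", "GroupId" ] }
        }]
    }
},
"epmoliteDB": {
    "Type": "AWS::RDS::DBInstance",
    "Properties": {
        "DBName": { "Ref": "DBname" },
        "VPCSecurityGroups": [{ "Fn::GetAtt": [ "DBVpcSecurityGroup", "GroupId" ] }],
        "AllocatedStorage": "5",
        "DBInstanceClass": "db.t2.micro",
        "Engine": "MySQL",
        "MasterUsername": { "Ref": "DBuser" },
        "MasterUserPassword": { "Ref": "DBpass" },
        "DBParameterGroupName": { "Ref": "epmoliteDBParameterGroup" }
    }
}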
I am creating a distributed database in OrientDB 2.2.6 with 3 nodes, namely master1, master2 and master3. I modified the hazelcast.xml and orientdb-server-config.xml files on each of the nodes. I used a common default-distributed-db-config.json on all 3 nodes, which looks as shown below.
{
"autoDeploy": true,
"readQuorum": 1,
"writeQuorum": "majority",
"executionMode": "undefined",
"readYourWrites": true,
"failureAvailableNodesLessQuorum": false,
"servers": {
"*": "master"
},
"clusters": {
"internal": {
},
"address": {
"owner" : "master1",
"servers": [ "master1" ]
},
"address_1": {
"owner" : "master1",
"servers" : [ "master1" ]
},
"ip": {
"owner" : "master2",
"servers" : [ "master2" ]
},
"ip_1": {
"owner" : "master2",
"servers" : [ "master2" ]
},
"id": {
"owner" : "master3",
"servers" : [ "master3" ]
},
"id_1": {
"owner" : "master3",
"servers" : [ "master3" ]
},
"*": {
"servers": [ "<NEW_NODE>" ]
}
}
}
Then I started the distributed server on the master1 machine, then on master2 and master3, in this order, and let them synchronize the default DB. Then I created a database and three classes (Address, IP, ID), with their properties and indexes, on the master1 machine. As specified in the default-distributed-db-config.json file, the Address class has two clusters and they reside on the master1 machine; the IP class has two clusters and they reside on the master2 machine.
When I insert values into the Address class, as expected they go into the master1 machine's clusters, following the round-robin strategy. But when I insert values for IP from the master2 machine, a new cluster is created on master1 and the values are inserted into that new cluster. Basically, all the values end up on the master1 machine. When I run list clusters, the clusters on the master2 and master3 machines are empty.
So I could not distribute the data across the three nodes; everything is stored on a single machine. How do I shard the data? Is there an issue with the way I am inserting the data?
Thanks
In current OrientDB releases, write operations (create/update/delete) are not forwarded; only reads are. For this reason, the client should be connected to the server that handles the cluster you want your data written to.
Usually this isn't a problem, because a local cluster is selected, but writing to a specific cluster on a remote server is not supported yet.
I'm using Fiware Orion Context Broker version 0.20.
I notice that when I create a context subscription, my provided endpoint immediately gets notified about changes to the corresponding context elements that happened before I created the subscription.
To clarify: (note: I used these steps with a clean database)
I started the accumulator from the test package /usr/share/contextBroker/tests/accumulator-server.py 1028 /accumulate on
Created a context element using http://localhost:1026/v1/updateContext
{
"contextElements": [
{
"type": "Room",
"isPattern": "false",
"id": "Room1",
"attributes": [
{
"name": "temperature",
"value": "20"
}
]
}
],
"updateAction": "APPEND"
}
Then I created the subscription using http://localhost:1026/v1/subscribeContext
{
"entities": [
{
"type": "Room",
"isPattern": "true",
"id": ".*"
}
],
"attributes": [
"temperature"
],
"reference": "http://localhost:1028/accumulate",
"duration": "P1M",
"notifyConditions": [
{
"type": "ONCHANGE",
"condValues": [
"temperature"
]
}
],
"throttling": "PT5S"
}
I immediately receive the following content in the accumulator
POST http://localhost:1028/accumulate
Content-Length: 472
User-Agent: orion/0.20.0 libcurl/7.19.7
Host: localhost:1028
Accept: application/xml, application/json
Content-Type: application/json
{
"subscriptionId" : "55521671985dc3976b879780",
"originator" : "localhost",
"contextResponses" : [
{
"contextElement" : {
"type" : "Room",
"isPattern" : "false",
"id" : "Room1",
"attributes" : [
{
"name" : "temperature",
"type" : "",
"value" : "20"
}
]
},
"statusCode" : {
"code" : "200",
"reasonPhrase" : "OK"
}
}
]
}
Furthermore, if I create multiple context elements before adding the subscription, they are all part of the contextResponses in the notification.
For my use case, this behavior is undesirable. The subscriptions are very dynamic (they come and go often throughout the lifecycle of the application) and I do not want the entire history every time I create a subscription. I only want to be notified about changes from the moment I created the subscription onwards (not the history).
Did I overlook something in the documentation, and can I resolve this by changing the contents of the subscription request? If not, is this generally accepted behavior for the Context Broker, or just a plain bug?
It is the expected behaviour, as described in the manual:
You may wonder why accumulator-server.py is getting this message if you don't actually do any update. This is because the Orion Context Broker considers the transition from "non existing subscription" to "subscribed" as a change.
We understand that for some use cases this is not convenient. However, behaving in the opposite way ruins other use cases which need to know the "initial state" before starting to get notifications corresponding to actual changes. The best solution to make everybody happy is to make this configurable, so each client can choose what it prefers. This feature is currently on our roadmap (see this issue in github.com).
While this gets implemented in Orion, a possible workaround in your case is to just ignore the first notification received for a subscription (you can identify the subscription to which a notification belongs by the subscriptionId field in the notification payload). All the following notifications belonging to that subscription will correspond to actual changes.
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker image you will get it) and will be included in the next Orion version (2.2.0).
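At the time of writing, that means the initial notification can be suppressed by creating the subscription with the skipInitialNotification option (check the linked documentation section for the exact current syntax), e.g. POST /v2/subscriptions?options=skipInitialNotification with an NGSIv2 payload roughly equivalent to the v1 subscription above (a sketch):
{
    "subject": {
        "entities": [ { "idPattern": ".*", "type": "Room" } ],
        "condition": { "attrs": [ "temperature" ] }
    },
    "notification": {
        "http": { "url": "http://localhost:1028/accumulate" },
        "attrs": [ "temperature" ]
    },
    "expires": "2040-01-01T00:00:00.000Z",
    "throttling": 5
}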
I have an RDS database that I bring up using CloudFormation. Now I have a CloudFormation document that brings up my app server tier. How can I grant my app servers access to the RDS instance?
If the RDS instance were created by my CloudFormation document, I know I could do this:
"DBSecurityGroup": {
"Type": "AWS::RDS::DBSecurityGroup",
"Properties": {
"EC2VpcId" : { "Ref" : "VpcId" },
"DBSecurityGroupIngress": { "EC2SecurityGroupId": { "Fn::GetAtt": [ "AppServerSecurityGroup", "GroupId" ]} },
"GroupDescription" : "Frontend Access"
}
}
But the DBSecurityGroup will already exist by the time I run my app CloudFormation document. How can I update it?
Update: Following what huelbois pointed out to me below, I understood that I could just create an AWS::EC2::SecurityGroupIngress in my app CloudFormation document. As I am using a VPC and the code huelbois posted is for EC2-Classic, I can confirm that this works:
In the RDS CloudFormation document:
"DbVpcSecurityGroup" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Enable JDBC access on the configured port",
"VpcId" : { "Ref" : "VpcId" },
"SecurityGroupIngress" : [ ]
}
}
And in the app CloudFormation document:
"specialRDSRule" : {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties" : {
"IpProtocol": "tcp",
"FromPort": 5432,
"ToPort": 5432,
"GroupId": {"Ref": "DbSecurityGroupId"},
"SourceSecurityGroupId": {"Ref": "InstanceSecurityGroup"}
}
}
where DbSecurityGroupId is the id of the group set up above (something like sg-27324c43) and is a parameter to the app CloudFormation document.
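For reference, the RDS stack can expose that id through an output along these lines (a sketch; the output name is arbitrary), and its value is then passed in as the DbSecurityGroupId parameter of the app stack:
"Outputs": {
    "DbSecurityGroupId": {
        "Description": "Id of the security group guarding the RDS instance",
        "Value": { "Fn::GetAtt": [ "DbVpcSecurityGroup", "GroupId" ] }
    }
}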
When you want to use already existing resources in a CloudFormation template, you can use the previously created ids, instead of Ref or GetAtt.
In your example, you can use:
{ "EC2SecurityGroupId": "sg-xxxNNN" }
where "sg-xxxNNN" is the id of your DB SecurityGroup (not sure of the DB SecurityGroup prefix, since we don't use EC2-classic but VPC).
I would recommend using a parameter for your SecurityGroup in your template.
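For a VPC security group that parameter can look like this sketch (the AWS::EC2::SecurityGroup::Id parameter type gives you basic validation; use a plain String for an EC2-Classic DB security group name):
"Parameters": {
    "DbSecurityGroupId": {
        "Type": "AWS::EC2::SecurityGroup::Id",
        "Description": "Id of the already existing DB security group, e.g. sg-xxxNNN"
    }
}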
*** Update ***
For your specific setup, I would use a "DBSecurityGroupIngress" resource to authorize a new security group on your RDS instance's DB security group.
In your first stack (RDS), you create an empty DBSecurityGroup like this:
"DBSecurityGroup": {
"Type": "AWS::RDS::DBSecurityGroup",
"Properties": {
"EC2VpcId" : { "Ref" : "VpcId" },
"DBSecurityGroupIngress": [],
"GroupDescription" : "Frontend Access"
}
}
This DBSecurityGroup is referred to by the DBInstance. (I guess you have specific requirements for using DBSecurityGroup instead of VPCSecurityGroups.)
In your App stack, you create a DBSecurityGroupIngress resource, which is a child of the DBSecurityGroup you created in the first stack:
"specialRDSRule" : {
"Type":"AWS::RDS::DBSecurityGroupIngress",
"Properties" : {
"DBSecurityGroupName": "<the arn of the DBSecurityGroup>",
"CIDRIP": String,
"EC2SecurityGroupId": String,
"EC2SecurityGroupName": String,
"EC2SecurityGroupOwnerId": String
}
}
You need the ARN of the DBSecurityGroup, which has the form "arn:aws:rds:<region>:<account-id>:secgrp:<security-group-name>". The other parameters come from your App stack; I'm not sure you need all of them (I don't use EC2-Classic security groups, only VPC).
Reference : http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-security-group-ingress.html
We use the same mechanism with VPC security groups, with Ingress & Egress rules, so we can have two security groups reference each other.
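That pattern looks roughly like this (a sketch with hypothetical names): both groups are declared without inline rules, and the cross-references live in standalone ingress/egress resources, which avoids a circular dependency:
"AppSG": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": { "GroupDescription": "App tier", "VpcId": { "Ref": "VpcId" } }
},
"DbSG": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": { "GroupDescription": "DB tier", "VpcId": { "Ref": "VpcId" } }
},
"DbIngressFromApp": {
    "Type": "AWS::EC2::SecurityGroupIngress",
    "Properties": {
        "GroupId": { "Fn::GetAtt": [ "DbSG", "GroupId" ] },
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "SourceSecurityGroupId": { "Fn::GetAtt": [ "AppSG", "GroupId" ] }
    }
},
"AppEgressToDb": {
    "Type": "AWS::EC2::SecurityGroupEgress",
    "Properties": {
        "GroupId": { "Fn::GetAtt": [ "AppSG", "GroupId" ] },
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "DestinationSecurityGroupId": { "Fn::GetAtt": [ "DbSG", "GroupId" ] }
    }
}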
I created a CloudFormation template and I wanted to create an IAM user. To do that I used this JSON snippet:
"CFNUser" : {
"Type" : "AWS::IAM::User",
"Properties" : {
"LoginProfile": {
"Password": { "Ref" : "AdminPassword" }
}
}
},
Then for the group I used this:
"CFNUserGroup" : {
"Type" : "AWS::IAM::Group"
},
After creating the stack, I got the following:
user name - IAMUsers-CFNUser-E1BT342YK7G6
group name - IAMUsers-CFNUserGroup-1UBUBRYALTIMI
So my question is: how can I set the user name here? The same goes for the group name.
After talking with one of the AWS support engineers: at this time of writing, it is not possible to specify your own user name and group name in IAM using a CloudFormation template :-(
Maybe there's a reason why they don't allow users to do this... Anyway, it's a good thing that I have an answer to this question, and I will be glad if someone finds this useful.
Amazon added support for this on 20 July 2016:
https://aws.amazon.com/about-aws/whats-new/2016/07/aws-cloudformation-adds-support-for-aws-iot-and-additional-updates/
The AWS::IAM::User resource now has a UserName property:
{
"Type": "AWS::IAM::User",
"Properties": {
"Groups": [ String, ... ],
"LoginProfile": LoginProfile Type,
"ManagedPolicyArns": [ String, ... ],
"Path": String,
"Policies": [ Policies, ... ],
"UserName": String
}
}
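Applied to the CFNUser resource from the question, that could look like this (the UserName value here is just an example):
"CFNUser": {
    "Type": "AWS::IAM::User",
    "Properties": {
        "UserName": "my-cfn-admin-user",
        "LoginProfile": {
            "Password": { "Ref": "AdminPassword" }
        }
    }
}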
For groups, it's a GroupName property:
"CFNUserGroup" : {
"Type" : "AWS::IAM::Group",
"Properties": {
"GroupName": "My_CFN_User_Group"
}
}