I'm trying to provision a new EC2 instance and attach it to an existing target group that I created earlier. I have the targetGroupArn and the new instance details. In all the documentation I can find, the sample templates create a new instance and a new target group, and reference the instance ID from the newly created target group.
Is there any option to create only the new instance and attach it to the existing target group?
That way I could specify the targetGroupArn to which the new instances will be attached.
Below is a sample template of what I'm looking for.
{
    "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
    "Properties": {
        "TargetGroupArn": "arn:aws:elasticloadbalancing:ap-south-1:4444444444:targetgroup/MyTG/56abcccccccc56",
        "Targets": [
            {
                "Id": {
                    "Ref": "ec2Instance1"
                }
            }
        ]
    }
}
Is there any option to create only new instance and attaching it to the existing target group?
Yes, you can do that through a custom resource in the form of a Lambda function. The function would use an AWS SDK, such as boto3, to attach the instances to the existing target group.
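A minimal sketch of such a Lambda handler in Python with boto3, assuming the custom resource passes TargetGroupArn and InstanceId as resource properties (both property names are placeholders I chose, not a fixed API):

import boto3
import cfnresponse  # available when the function code is inlined via ZipFile

elbv2 = boto3.client('elbv2')

def handler(event, context):
    props = event['ResourceProperties']
    targets = [{'Id': props['InstanceId']}]
    try:
        if event['RequestType'] in ('Create', 'Update'):
            # Attach the new instance to the pre-existing target group
            elbv2.register_targets(TargetGroupArn=props['TargetGroupArn'], Targets=targets)
        elif event['RequestType'] == 'Delete':
            # Detach on stack deletion so the stack cleans up after itself
            elbv2.deregister_targets(TargetGroupArn=props['TargetGroupArn'], Targets=targets)
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})

In the template, the custom resource would then reference this function's ARN via ServiceToken and pass the existing targetGroupArn plus the Ref of the new instance as properties.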
I am setting up ECS services to launch my application, which talks to an RDS database server. I need to pass database access properties such as username, password, and dbname to the application code running in the FARGATE tasks. To pass them, I have created these parameters in the Parameter Store, but I need to find a way to read them from the Parameter Store and pass them to the ECS task definition's environment variable properties.
In the ECS task definition, I have tried to modify the JSON environment property with "name" and "valueFrom" entries, but "valueFrom" does not seem to be accepted in the JSON file; it throws the error "Cannot read property 'replace' of undefined".
"environment": [
{
"name": "POSTGRES_DB",
"valueFrom": "PROD_POSTGRES_DB"
}
],
I expect the POSTGRES_DB variable to read its value from PROD_POSTGRES_DB defined in the AWS Parameter Store.
When you use SSM Parameter Store values in an ECS task definition, the valueFrom entries go into a separate secrets section under containerDefinitions. It will look like this:
"containerDefinitions": [
{
"secrets": [
{
"name": "POSTGRES_DB",
"valueFrom": "PROD_POSTGRES_DB"
}
],
"environment": [
{
"valueFrom": "myKey",
"name": "myValue"
}
],
}
]
Normal plain-value environment variables stay in the usual environment JSON array, using "name" and "value".
Note -
When you use SSM Parameter Store, you have to make sure the Task Execution Role has the necessary SSM permissions attached. Reference - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
Also, provide the full SSM parameter ARN if your ECS region is different from the SSM region.
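For reference, a hedged boto3 sketch of registering such a task definition; the family, image, role, and parameter ARN below are placeholders:

import boto3

ecs = boto3.client('ecs')
ecs.register_task_definition(
    family='my-app',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='256',
    memory='512',
    # The execution role needs ssm:GetParameters on the referenced parameters
    executionRoleArn='arn:aws:iam::123456789012:role/ecsTaskExecutionRole',
    containerDefinitions=[{
        'name': 'app',
        'image': 'my-app:latest',
        # Plain values stay in "environment" with name/value pairs ...
        'environment': [{'name': 'POSTGRES_PORT', 'value': '5432'}],
        # ... SSM-backed values go in "secrets" with name/valueFrom pairs
        'secrets': [{
            'name': 'POSTGRES_DB',
            'valueFrom': 'arn:aws:ssm:us-east-1:123456789012:parameter/PROD_POSTGRES_DB',
        }],
    }],
)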
I have inherited an AWS account with a lot of resources. Some of them were created manually, others by CloudFormation.
How can I check if a resource (in my case Security Group) was created by CloudFormation and belongs to a stack?
For some security groups, aws ec2 describe-security-groups --group-ids real_id results in:
...
"Tags": [
    {
        "Value": "REAL_NAME",
        "Key": "aws:cloudformation:logical-id"
    },
    {
        "Value": "arn:aws:cloudformation:<REAL_ID>",
        "Key": "aws:cloudformation:stack-id"
    }
]
...
Other security groups don't have any tags.
Is this the only indicator? I mean, someone could easily remove the tags from an SG created by CloudFormation.
As per the official documentation, in addition to any tags you define, AWS CloudFormation automatically creates the following stack-level tags (prefixed with aws:):
aws:cloudformation:logical-id
aws:cloudformation:stack-id
aws:cloudformation:stack-name
All stack-level tags, including automatically created tags, are propagated to resources that AWS CloudFormation supports. Currently, tags are not propagated to Amazon EBS volumes that are created from block device mappings.
--
This should be a good place to start, but since CloudFormation doesn't enforce the stack state, if someone deleted the tags manually you would never know.
If I were you, I would export everything (that's supported) via CloudFormer and redesign the whole setup my way.
Another way:
You can pass the PhysicalResourceId of a resource to describe_stack_resources and get the stack information if it belongs to a CloudFormation stack. This is an example:
import boto3

cf = boto3.client('cloudformation')
cf.describe_stack_resources(PhysicalResourceId="i-0xxxxxxxxxxxxxxxx")
https://boto3.readthedocs.io/en/latest/reference/services/cloudformation.html#CloudFormation.Client.describe_stack_resources
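If the physical ID doesn't belong to any stack, the call raises a ClientError (a ValidationError) rather than returning an empty list, so a small helper for the "was this created by CloudFormation?" check could look like this sketch:

import boto3
from botocore.exceptions import ClientError

cf = boto3.client('cloudformation')

def owning_stack(physical_id):
    """Return the name of the stack owning the resource, or None if unmanaged."""
    try:
        resp = cf.describe_stack_resources(PhysicalResourceId=physical_id)
        return resp['StackResources'][0]['StackName']
    except ClientError:
        return None  # not part of any CloudFormation stack

print(owning_stack('sg-0xxxxxxxxxxxxxxxx'))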
I had the same issue. After no luck finding an answer I made a quick PowerShell script that will just look for a resource name in all of the stacks.
When CloudFormation was introduced, stacks didn't tag resources, and even now I have issues with CloudFormation reliably tagging resources: there are still times it will tag one resource and not another, even with the same resource type in the same stack. In addition, some resources, like CloudWatch alarms, don't have tags.
$resourceName = "*MyResource*" #Part of the resource name, surrounded by asterisks (*)
$awsProfile = "Dev" #AWS Profile to use
$awsRegion = "us-east-1" #Region to query
Get-CFNStack -ProfileName $awsProfile -Region $awsRegion | Get-CFNStackResourceList -ProfileName $awsProfile -Region $awsRegion | Where-Object {$_.PhysicalResourceId -ilike $resourceName} | Select-Object StackName,PhysicalResourceId
I use Deployment Manager and describe my resources in Python files (Deployment Manager allows creating configurations using Python or Jinja).
I use this format for creating a topic resource:
return {
    'name': topic,
    'type': 'pubsub.v1.topic',
    'properties': {
        'topic': topic
    },
    'accessControl': {
        'gcpIamPolicy': {
            'bindings': [
                {
                    'role': 'roles/pubsub.publisher',
                    'members': ['serviceAccount:' + project_name + '@gs-project-accounts.iam.gserviceaccount.com']
                }
            ]
        }
    }
}
The format [project_name]@gs-project-accounts.iam.gserviceaccount.com worked fine several weeks ago, but for newly created projects such a service account is not found.
Is it correct that the format of Google Cloud Storage service accounts was changed, so that for a newly created project the old format fails with service account ... doesn't exist? It used to be [project_name]@gs-project-accounts.iam.gserviceaccount.com, and currently it is service-[project_id]@gs-project-accounts.iam.gserviceaccount.com.
I checked it with this API, and for newly created projects I do get the new format: service-[project_id]@gs-project-accounts.iam.gserviceaccount.com.
How can we fetch the Google Cloud Storage service account dynamically in Deployment Manager config files? As far as I can see here, there are only a few environment variables, like project, project_number, time, etc., and there isn't any storage_service_account environment variable.
The GCS service account format recently changed to the following:
service-[PROJECT_NUMBER]@gs-project-accounts.iam.gserviceaccount.com
Existing projects will continue to work with the previous format; for new projects, the new format is the way forward.
To verify the format, you can call the projects.serviceAccount get API.
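Since Deployment Manager exposes project_number in context.env, a Python template can derive the new-style account dynamically. A sketch, assuming the topic name comes in via context.properties (a placeholder of mine):

def generate_config(context):
    # Build the new-format GCS service account from the built-in project_number
    gcs_account = ('service-%s@gs-project-accounts.iam.gserviceaccount.com'
                   % context.env['project_number'])
    return {
        'resources': [{
            'name': context.properties['topic'],
            'type': 'pubsub.v1.topic',
            'properties': {'topic': context.properties['topic']},
            'accessControl': {
                'gcpIamPolicy': {
                    'bindings': [{
                        'role': 'roles/pubsub.publisher',
                        'members': ['serviceAccount:' + gcs_account],
                    }],
                },
            },
        }],
    }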
I want to extend a custom OData service created in an S/4HANA system. I added a Cloud Connector to my machine, but I don't know where to go from there. The idea is that people should access the service from SCP, and that I don't need multiple accounts accessing the service on the S/4 system, just the one coming from SCP. Any ideas?
OK, I feel silly doing this, but it seems to work. My test is actually inconclusive because I don't have a Cloud Connector handy, but it works proxying Google.
I'm still thinking about how to make it publicly accessible. There might be people with better answers than this.
1. Create the Cloud Connector destination (a sample is at the end of this answer).
2. Make a new folder in Web IDE.
3. Create a file neo-app.json with this content:
{
    "routes": [{
        "path": "/google",
        "target": {
            "type": "destination",
            "name": "google"
        },
        "description": "google"
    }],
    "sendWelcomeFileRedirect": false
}
path is the proxy path in your app, so myapp.scp-account/google here. The target name is your destination; I called mine just google, but you'll put your Cloud Connector destination there.
Deploy.
My test app with a destination google pointing to https://www.google.com came out proxied: relative paths don't resolve, so the page looks broken, but Google is clearly being served through the proxy.
You'll still have to authenticate, etc.
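For step 1, a destination pointing at a Cloud Connector is plain key=value configuration along these lines; the name, virtual host, port, and authentication below are placeholders you map in the Cloud Connector itself:

Name=s4odata
Type=HTTP
URL=http://my-virtual-host:44300
ProxyType=OnPremise
Authentication=NoAuthentication

ProxyType=OnPremise is what routes the call through the Cloud Connector tunnel instead of the public internet.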
Is there any Azure Resource Manager template documentation? I am trying to recreate a VM using a resource template, and the only thing I am missing is creating a data disk from an image the same way the OS disk is created. I edited the JSON template:
"dataDisks": [
{
"lun": 0,
"name": "[concat(parameters('virtualMachines_testVM_name'),'-disk-1')]",
"createOption": "FromImage",
"vhd": {
"uri": "[concat('https', '://', parameters('storageAccounts_rmtemplatetest6221copy_name'), '.blob.core.windows.net', concat('/vhds/', parameters('virtualMachines_testVM_name'),'-disk-1-201649102835.vhd'))]"
},
"caching": "ReadWrite"
}
]
But I get the following error in Azure when deploying the template:
Required parameter 'dataDisk.image' is missing
So far the only way I've managed to recreate the data disk is to delete the above code from the JSON template and then use PowerShell after the machine is created without the data disk, but I would like to automate the deployment with the resource template only.
In the Azure quickstart templates you can find a JSON template for creating a VM from custom images, including data disks:
https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-user-image-data-disks
Just one very important note - the target storage account should be the same account where your VHDs reside.
There is no standing documentation on the JSON schema. The best source is to check out the schema itself:
https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json
http://schema.management.azure.com/schemas/2015-08-01/Microsoft.Compute.json#/resourceDefinitions/virtualMachine
UPDATE
When you create a VM based on a custom image, including data disks, you must create the entire VM in the same storage account where your custom data disks reside. There is no option, as of today (2016-05-10), to instruct ARM to copy VHDs across storage accounts.
All of this applies if you want to create a VM from a custom image with data disks.
If you just want to create the VM with new, empty data disks, then you can use the following quick start template:
https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-multiple-data-disk
where you only define the desired size of the data disks and where they should be stored.
The problem you are having is that you have the template configured to make a copy of an image, and you have no image specified.
You need to either set the createOption to FromImage and specify an image:
"dataDisks": [
{
"name": "[concat(variables('vmName'),'-dataDisk')]",
"lun": 0,
"createOption": "FromImage",
"image": {
"uri": "[variables('dataDiskUrl')]"
},
"vhd": {
"uri": "[variables('dataDiskVhdName')]"
}
}
],
or, if you just want to use an existing disk, you can use Attach (you can also use Empty in this configuration, and it will create an empty disk):
"dataDisks": [
{
"name": "[concat(variables('vmName'),'-dataDisk')]",
"lun": 0,
"createOption": "attach",
"vhd": {
"uri": "[variables('dataDiskVhdName')]"
}
}
],