How to create a parent resource in AWS API Gateway? - aws-api-gateway

When using the AWS API Gateway service, I'd like to add a "parent" resource without deleting and rebuilding the resource structure. Specifically, I'd like to change this:
/resource/name
/resource/name
And add a "parent" resource (v1) to them without deleting and remaking the two "resource/name" resources, like this:
/v1
    /resource/name
    /resource/name
If it requires use of the CLI, what would an example command look like?
UPDATE:
Thanks for the great answer, Ka Hou Ieong. Here are some notes on implementing it:
rest-api-id : Put the API ID here. You can look it up with this command: aws apigateway get-rest-apis
resource-id : Put the ID of the resource you'd like to move here. You can look it up with this command: aws apigateway get-resources --rest-api-id API-ID-HERE
replace : Leave this; it's the operation.
/parentId : Leave this. It refers to the "key" of the value that you'll replace.
<new parent resourceId> : Replace this with the ID of the parent you'd like.

You can create a resource with path part "/v1", then re-parent these resources by using the CLI tool or SDK.
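For the first step, a minimal sketch of creating the "/v1" resource (the API ID and root resource ID below are placeholders you'd look up first with get-rest-apis / get-resources):
# Create the new parent resource /v1 directly under the root ("/") resource
aws apigateway create-resource \
    --rest-api-id abc123 \
    --parent-id root-resource-id \
    --path-part v1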
Example CLI command to re-parent the resource:
aws apigateway update-resource \
--rest-api-id rest-api-id \
--resource-id resource-id \
--cli-input-json "{\"patchOperations\" : [
    {
        \"op\" : \"replace\",
        \"path\" : \"/parentId\",
        \"value\" : \"<new parent resourceId>\"
    }
]}"
Here is the cli tool documentation: http://docs.aws.amazon.com/cli/latest/reference/apigateway/update-resource.html
Here is the API reference: http://docs.aws.amazon.com/apigateway/api-reference/link-relation/resource-update/

Related

aws api gateway adding basePath using openApi 3.0.3

Hey, I'm using OpenAPI version 3.0.3 to add REST APIs in AWS API Gateway. The problem is I have to add a basePath for all my APIs.
The docs say: "If the API doesn't contain any basePath variables, the Import API feature checks the server.url string to see if it contains a path beyond "/". If it does, that path is used as the base path."
I tried adding the following basePath in server.url
"servers": [
    {
        "url": "/foo"
    }
]
but the basePath is not getting reflected. Am I missing something here?
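One thing that may be worth trying (an untested sketch; the domain and the "foo" default are placeholders): declare basePath as a server variable, which is the other case the quoted docs mention, instead of a relative server.url:
"servers": [
    {
        "url": "https://example.com/{basePath}",
        "variables": {
            "basePath": {
                "default": "foo"
            }
        }
    }
]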

How to pass API parameters to GCP cloud build triggers

I have a large set of GCP Cloud Build Triggers that I invoke via a Cloud scheduler, all running fine.
Now I want to invoke these triggers by an external API call and pass them dynamic parameters that vary in values and number of parameters.
I was able to start a trigger by running an API request, but any JSON parameters I sent in the API request were ignored.
Google talks about substitution parameters at https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values. I define these variables in the cloudbuild.yaml file; however, they were not propagated into my shell script from the API request.
I don't see any errors with authentication or authorization, so security may not be the issue.
Is my idea supported at all, or do I need to resort to another solution such as running a GKE cluster with containers that expose their own API (a very heavyweight solution)?
We do something similar -- we migrated from Jenkins to GCB but for some people we still need a nicer "UI" to start builds / pass variables.
I got scripts from here and modified them to our own needs: https://medium.com/@nieldw/put-your-build-triggers-into-source-control-with-the-cloud-build-api-ed0c18d6fcac
Here is their REST API: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/run
For the script below, keep in mind you need the trigger-id of what you want to run. (you can also get this by parsing the output of another REST API.)
# id of the trigger to run (first script argument)
TRIGGER_ID=$1
# we need to specify AT LEAST the branch name or commit id (checked below)
BRANCH_OR_SHA=$2
# check if branch_name or commit_sha
if [[ $BRANCH_OR_SHA =~ [0-9a-f]{5,40} ]]; then
    # is COMMIT_HASH
    COMMIT_SHA=$BRANCH_OR_SHA
    BRANCH_OR_SHA="\"commitSha\": \"$COMMIT_SHA\""
else
    # is BRANCH_NAME
    BRANCH_OR_SHA="\"branchName\": \"$BRANCH_OR_SHA\""
fi
# This is the request we send to Google so it knows what to build.
# Here we're overriding some variables that we have already set in the default 'cloudbuild.yaml' file of the repo.
cat <<EOF > request.json
{
    "projectId": "$PROJECT_ID",
    $BRANCH_OR_SHA,
    "substitutions": {
        "_MY_VAR_1": "my_value",
        "_MY_VAR_2": "my_value_2"
    }
}
EOF
# our curl POST: we send 'request.json' with the info, add our token, and set the trigger_id
curl -X POST -T request.json -H "Authorization: Bearer $(gcloud config config-helper \
    --format='value(credential.access_token)')" \
    https://cloudbuild.googleapis.com/v1/projects/"$PROJECT_ID"/triggers/"$TRIGGER_ID":run
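Assuming the script is saved as run-trigger.sh (the name is arbitrary) and PROJECT_ID is set in the environment, invoking it might look like this (the trigger ID and branch are placeholders):
export PROJECT_ID=my-gcp-project
./run-trigger.sh 1234abcd-5678-90ef my-feature-branch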

How can I check if a resource was created by CloudFormation?

I have inherited an AWS account with a lot of resources. Some of them were created manually, other by CloudFormation.
How can I check if a resource (in my case Security Group) was created by CloudFormation and belongs to a stack?
For some security groups aws ec2 describe-security-groups --group-ids real_id results in:
...
"Tags": [
{
"Value": "REAL_NAME",
"Key": "aws:cloudformation:logical-id"
},
{
"Value": "arn:aws:cloudformation:<REAL_ID>",
"Key": "aws:cloudformation:stack-id"
},
]
...
Other security groups don't have any tags.
Is it the only indicator? I mean, someone could easily remove tags from an SG created by CloudFormation.
As per the official documentation, in addition to any tags you define, AWS CloudFormation automatically creates the following stack-level tags with the prefix aws::
aws:cloudformation:logical-id
aws:cloudformation:stack-id
aws:cloudformation:stack-name
All stack-level tags, including automatically created tags, are propagated to resources that AWS CloudFormation supports. Currently, tags are not propagated to Amazon EBS volumes that are created from block device mappings.
--
This should be a good place to start, but CloudFormation doesn't enforce the stack state, so if someone deleted something manually you would never know.
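For example, a rough sketch of using those tags to list security groups that appear to be CloudFormation-managed (region/profile options omitted):
aws ec2 describe-security-groups \
    --filters Name=tag-key,Values=aws:cloudformation:stack-id \
    --query "SecurityGroups[].GroupId" \
    --output text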
If I were you, I would export everything (supported) via Cloudformer and re-design the whole setup my way.
Another way:
You can pass the PhysicalResourceId of a resource to describe_stack_resources and get the stack information if it belongs to a CloudFormation stack. Here is an example:
import boto3

# Look up the stack that owns a given physical resource (e.g. an EC2 instance ID)
cf = boto3.client('cloudformation')
cf.describe_stack_resources(PhysicalResourceId="i-0xxxxxxxxxxxxxxxx")
https://boto3.readthedocs.io/en/latest/reference/services/cloudformation.html#CloudFormation.Client.describe_stack_resources
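The same lookup works from the AWS CLI if you prefer it over boto3 (same placeholder instance ID):
aws cloudformation describe-stack-resources --physical-resource-id i-0xxxxxxxxxxxxxxxx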
I had the same issue. After no luck finding an answer I made a quick PowerShell script that will just look for a resource name in all of the stacks.
When CF was introduced, stacks didn't tag resources, and even now I have issues with CloudFormation reliably tagging resources: there are still times it will tag one resource and not another, even with the same resource type and in the same stack. In addition, some resources like CloudWatch Alarms don't have tags.
$resourceName = "*MyResource*" #Part of the resource name, surrounded by asterisks (*)
$awsProfile = "Dev" #AWS Profile to use
$awsRegion = "us-east-1" #Region to query
Get-CFNStack -ProfileName $awsProfile -Region $awsRegion |
    Get-CFNStackResourceList -ProfileName $awsProfile -Region $awsRegion |
    Where-Object {$_.PhysicalResourceId -ilike $resourceName} |
    Select-Object StackName,PhysicalResourceId

CodePipeline CloudFormation Template configuration

I'm trying to use the Template configuration field of the CloudFormation action in a CodePipeline (the field you see when editing the CloudFormation action in the CodePipeline console).
If my InputArtifactName is MyAppBuild and I have a CloudFormation config file cfg-prd.json, my hope was I could enter MyAppBuild::cfg-prd.json and have it pick the file up.
I get an error about the template file not being valid, even though the same file works manually as:
--parameters cfg-prd.json
Note that the Template Configuration File has a different JSON structure than the format accepted by the --parameters option to aws cloudformation create-stack:
{
    "Parameters" : {
        "NameOfTemplateParameter" : "ValueOfParameter",
        ...
    },
    "StackPolicy" : {
        "Statement" : [
            StackPolicyStatement
        ]
    }
}
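For comparison, the file passed to the --parameters option of aws cloudformation create-stack is a flat list rather than a nested object, roughly like this (key names taken from the structure above):
[
    {
        "ParameterKey": "NameOfTemplateParameter",
        "ParameterValue": "ValueOfParameter"
    }
]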
A nice trick to see what the configuration layout should be is to create a pipeline that you know works, then use the CLI to extract its definition.
aws codepipeline get-pipeline --name <pipelinename>
You will get the JSON of the CodePipeline resource. There will be a few changes required, but from this you can see exactly what the syntax should be, and from there you can parameterise your template and create CodePipelines programmatically.
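As a rough illustration (the stack name and template file name are placeholders), the CloudFormation deploy action in the get-pipeline output should contain a configuration block along these lines, with TemplateConfiguration in the ArtifactName::FileName form:
"configuration": {
    "ActionMode": "CREATE_UPDATE",
    "StackName": "my-stack",
    "TemplatePath": "MyAppBuild::template.yaml",
    "TemplateConfiguration": "MyAppBuild::cfg-prd.json"
}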

How to Create Dynamodb Global Secondary Index using AWS CLI?

The AWS CLI for DynamoDB create-table is a little bit confusing when it comes to creating a global secondary index. The CLI documentation says a global secondary index can be expressed with the following (shorthand) expression:
IndexName=string,KeySchema=[{AttributeName=string,KeyType=string},{AttributeName=string,KeyType=string}],Projection={ProjectionType=string,NonKeyAttributes=[string,string]},ProvisionedThroughput={ReadCapacityUnits=long,WriteCapacityUnits=long} ...
My interpretation is, I should do
--global-secondary-indexes IndexName=requesterIndex,Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}
Note that I am not including KeySchema here, to reduce complexity. The console gives me the following error:
Parameter validation failed:
Missing required parameter in GlobalSecondaryIndexes[0]: "KeySchema"
Unknown parameter in GlobalSecondaryIndexes[0]: "WriteCapacityUnits", must be one of: IndexName, KeySchema, Projection, ProvisionedThroughput
Invalid type for parameter GlobalSecondaryIndexes[0].ProvisionedThroughput, value: ReadCapacityUnits=1, type: <class 'str'>, valid types: <class 'dict'>
So somehow the AWS CLI does not recognize the map expression for ProvisionedThroughput. I tried several ways to express it and could not make it work. I also failed to find any web page on Google describing how to do it.
This is the CLI call I used to create the Reply sample table from the AWS documentation from the command line. The $EP I use at the end can be set in the environment to EP="--endpoint-url http://localhost:8000" to create the table on your local DynamoDB instead of AWS.
aws dynamodb create-table --table-name Reply --attribute-definitions \
AttributeName=Id,AttributeType=S AttributeName=ReplyDateTime,AttributeType=S \
AttributeName=PostedBy,AttributeType=S AttributeName=Message,AttributeType=S \
--key-schema AttributeName=Id,KeyType=HASH \
AttributeName=ReplyDateTime,KeyType=RANGE --global-secondary-indexes \
IndexName=PostedBy-Message-Index,KeySchema=["\
{AttributeName=PostedBy,KeyType=HASH}","\
{AttributeName=Message,KeyType=RANGE}"],Projection="{ProjectionType=INCLUDE \
,NonKeyAttributes=["ReplyDateTime"]}",ProvisionedThroughput="\
{ReadCapacityUnits=10,WriteCapacityUnits=10}" --provisioned-throughput \
ReadCapacityUnits=5,WriteCapacityUnits=4 $EP
Reading through the AWS CLI source code on GitHub, I found it can parse double-quoted content, so adding double quotes in the script solved the issue. Here is the new code:
--global-secondary-indexes IndexName=requesterIndex,Projection={ProjectionType=ALL},ProvisionedThroughput="{ReadCapacityUnits=${CURRENT_READUNIT},WriteCapacityUnits=${CURRENT_WRITEUNIT}}"
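Putting it together, a full create-table call including the index might look like the sketch below (the table, attribute, and index names are invented); single-quoting the whole --global-secondary-indexes value likewise keeps the shell from mangling the {...,...} parts:
aws dynamodb create-table \
    --table-name Requests \
    --attribute-definitions AttributeName=id,AttributeType=S AttributeName=requester,AttributeType=S \
    --key-schema AttributeName=id,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
    --global-secondary-indexes 'IndexName=requesterIndex,KeySchema=[{AttributeName=requester,KeyType=HASH}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}'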
Define the table structure in a JSON file, including the index structures. Use the following to generate a skeleton:
aws dynamodb create-table --generate-cli-skeleton
Then run the CLI command with the table-definition JSON as input:
aws dynamodb create-table --cli-input-json file://path-to-yourtable-definition.json
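As a loose sketch (names are invented), the relevant parts of that input file would contain the same pieces the shorthand form does:
{
    "TableName": "Requests",
    "AttributeDefinitions": [
        { "AttributeName": "id", "AttributeType": "S" },
        { "AttributeName": "requester", "AttributeType": "S" }
    ],
    "KeySchema": [
        { "AttributeName": "id", "KeyType": "HASH" }
    ],
    "ProvisionedThroughput": { "ReadCapacityUnits": 1, "WriteCapacityUnits": 1 },
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "requesterIndex",
            "KeySchema": [
                { "AttributeName": "requester", "KeyType": "HASH" }
            ],
            "Projection": { "ProjectionType": "ALL" },
            "ProvisionedThroughput": { "ReadCapacityUnits": 1, "WriteCapacityUnits": 1 }
        }
    ]
}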