How to access CloudWatch Event data from a triggered Fargate task?

I read the docs on how to Run an Amazon ECS Task When a File is Uploaded to an Amazon S3 Bucket. However, this document stops short of explaining how to get the bucket/key values from the triggering event from within the Fargate task code itself. How can that be done?

I'm not sure if you still need an answer for this one, but I did something similar to what Steven1978 mentioned, only using CloudFormation.
The config you're looking for is the InputTransformer. Here is an example YAML CloudFormation template for an Event Rule:
rEventRuleForFileUpload:
  Type: AWS::Events::Rule
  Properties:
    Description: "EventRule"
    State: "ENABLED"
    EventPattern:
      source:
        - "aws.s3"
      detail-type:
        - 'AWS API Call via CloudTrail'
      detail:
        eventSource:
          - s3.amazonaws.com
        eventName:
          - "PutObject"
          - "CompleteMultipartUpload"
        requestParameters:
          bucketName: "{YOUR_BUCKET_NAME}"
    Targets:
      - Id: '{YOUR_ECS_CLUSTER_ID}'
        Arn: !Sub "arn:aws:ecs:${AWS::Region}:${AWS::AccountId}:cluster/${NAME_OF_YOUR_CLUSTER_RESOURCE}"
        RoleArn: !GetAtt {YOUR_ROLE}.Arn
        EcsParameters:
          TaskCount: 1
          TaskDefinitionArn: !Ref {YOUR_TASK_DEFINITION}
          LaunchType: FARGATE
          # {... WHATEVER CONFIG YOU MIGHT HAVE ...}
        InputTransformer:
          InputPathsMap:
            s3_bucket: "$.detail.requestParameters.bucketName"
            s3_key: "$.detail.requestParameters.key"
          InputTemplate: '{ "containerOverrides": [ { "name": "{THE_NAME_OF_YOUR_CONTAINER_DEFINITION}", "environment": [ { "name": "EVENT_BUCKET", "value": <s3_bucket> }, { "name": "EVENT_OBJECT_KEY", "value": <s3_key> }] } ] }'
With this approach, you'll get the S3 bucket name (EVENT_BUCKET) and the S3 object key (EVENT_OBJECT_KEY) as environment variables inside your container.
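Inside the task, reading these values is then plain environment-variable access. A minimal sketch in Python (the boto3 download step is illustrative, not part of the original answer):

import os

import boto3  # assumed to be available in the task image

# Injected by the InputTransformer's containerOverrides above.
bucket = os.environ["EVENT_BUCKET"]
key = os.environ["EVENT_OBJECT_KEY"]

# Example use: fetch the object that triggered the task.
s3 = boto3.client("s3")
s3.download_file(bucket, key, "/tmp/" + os.path.basename(key))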
The documentation isn't very clear, indeed, but here are some sources I used to finally get it working:
ContainerOverride: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerOverride.html
InputTransformer: https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_InputTransformer.html#API_InputTransformer_Contents

Related

AWS CodePipeline GitHub webhook cannot be registered with GitHub if the repo is an organisation repository

When I set up the hook using the console it works, but when I try to do it using CloudFormation it never works. It does not even work if I use the AWS CLI version:
aws codepipeline register-webhook-with-third-party --webhook-name AppPipelineWebhook-aOnbonyFrNZu
This is what my webhook looks like (output from "aws codepipeline list-webhooks"):
{
    "webhooks": [
        {
            "definition": {
                "name": "AppPipelineWebhook-aOnbonyFrNZu",
                "targetPipeline": "ftp-proxy-cf",
                "targetAction": "GitHubAction",
                "filters": [
                    {
                        "jsonPath": "$.ref",
                        "matchEquals": "refs/heads/{Branch}"
                    }
                ],
                "authentication": "GITHUB_HMAC",
                "authenticationConfiguration": {
                    "SecretToken": "<REDACTED>"
                }
            },
            "url": "https://eu-west-1.webhooks.aws/trigger?t=eyJ<ALSO REDACTED>F9&v=1",
            "arn": "arn:aws:codepipeline:eu-west-1:<our account ID>:webhook:AppPipelineWebhook-aOnbonyFrNZu",
            "tags": []
        }
    ]
}
The error I get is:
An error occurred (ValidationException) when calling the RegisterWebhookWithThirdParty operation: Webhook could not be registered with GitHub. Error cause: Not found [StatusCode: 404, Body: {"message":"Not Found","documentation_url":"https://developer.github.com/v3/repos/hooks/#create-a-hook"}]
These are the two relevant sections from my CloudFormation file:
Resources:
  AppPipelineWebhook:
    Type: AWS::CodePipeline::Webhook
    Properties:
      Authentication: GITHUB_HMAC
      AuthenticationConfiguration:
        SecretToken: '{{resolve:secretsmanager:my/secretpath/github:SecretString:token}}'
      Filters:
        - JsonPath: $.ref
          MatchEquals: 'refs/heads/{Branch}'
      TargetPipeline: !Ref CodePipeline
      TargetAction: GitHubAction
      TargetPipelineVersion: !GetAtt CodePipeline.Version
      # RegisterWithThirdParty: true
  CodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name:
        Ref: PipelineName
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      Stages:
        - Name: Source
          Actions:
            - Name: GitHubAction
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Version: 1
                Provider: GitHub
              OutputArtifacts:
                - Name: SourceOutput
              Configuration:
                Owner: myorganisationnameongithub
                Repo: ftp-proxy
                Branch: master
                OAuthToken: '{{resolve:secretsmanager:my/secretpath/github:SecretString:token}}'
                PollForSourceChanges: false
Polling for changes works all right: if I manually order an execution of the GitHubAction stage from the AWS Console, the latest commits are downloaded. And if I set PollForSourceChanges: true, that kind of polling also works, but alas not the webhook workflow (because the hook cannot be registered with GitHub).
The error is observed due to two possible causes:
1. The Personal Access Token (PAT) is not configured to have the following GitHub scopes: admin:repo_hook and admin:org_hook.
You can verify these permissions under 'User' (top right) > 'Settings' > 'Developer settings' > 'Personal access tokens', or from the command line (see the curl check after the example below).
2. The 'Owner' and/or 'Repository' name is incorrect in the CloudFormation template:
For the pipeline Configuration in CloudFormation, make sure 'GitHubOwner' is the organization name and the repository name is just the repo name, not "org/repo_name", e.g. in your case:
Example:
Configuration:
  Owner: !Ref GitHubOwner            # GitHub org name
  Repo: !Ref RepositoryName
  Branch: !Ref BranchName
  OAuthToken: !Ref GitHubOAuthToken  # Personal Access Token
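For the first cause, a quick way to check a classic PAT's scopes is the X-OAuth-Scopes response header that GitHub returns (replace <YOUR_PAT> with your token):
curl -sI -H "Authorization: token <YOUR_PAT>" https://api.github.com/user | grep -i x-oauth-scopes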

Google Cloud Deployment Manager: INVALID_ARGUMENT

I'm trying to create a Cloud SQL instance via the Deployment Manager API. When I create it directly from a YAML file it is created successfully, but when I create the instance from a jinja/python file I get the error below:
code: RESOURCE_ERROR
location: /deployments/olpr/resources/test
message: '{"ResourceType":"sqladmin.v1beta4.instance","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Request contains an invalid argument.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/project_id/instances","httpMethod":"POST"}}'
Is there any way I can see which argument is invalid so that I can fix it?
Please help me with some suggestions.
The resource is as below:
resources = [
    {
        'name': 'test',
        'type': 'sqladmin.v1beta4.instance',
        'properties': {
            'zone': 'europe-west1-b',
            'rootPassword': '1234567',
            'instanceType': 'CLOUD_SQL_INSTANCE',
            'databaseVersion': 'SQLSERVER_2017_EXPRESS',
            'backendType': 'SECOND_GEN',
            'settings': {
                'machineType': 'db-custom-1-3840',
                'dataDiskSizeGb': 10,
                'dataDiskType': 'PD_SSD',
                'ipConfiguration': {
                    'ipv4Enabled': False,
                    'privateNetwork': 'projects/project_id/global/networks/project_id-vpc'
                }
            }
        }
    }
]
YAML file:
resources:
  - name: he
    type: sqladmin.v1beta4.instance
    properties:
      region: europe-west1
      zone: europe-west1-b
      backendType: SECOND_GEN
      instanceType: CLOUD_SQL_INSTANCE
      databaseVersion: SQLSERVER_2017_EXPRESS
      serviceAccountEmailAddress: user@project_id.iam.gserviceaccount.com
      rootPassword: mypass
      settings:
        dataDiskSizeGb: 10
        dataDiskType: PD_SSD
        ipConfiguration:
          ipv4Enabled: false
          privateNetwork: vpc
        kind: sql#settings
        machineType: db-custom-1-3840
You're not supplying a region in the Python version. Try adding `'region': 'europe-west1'` to the properties.
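For example, a minimal sketch of the corrected resource (only the region line is new; everything else is unchanged from the question):

resources = [
    {
        'name': 'test',
        'type': 'sqladmin.v1beta4.instance',
        'properties': {
            'region': 'europe-west1',  # the missing argument
            'zone': 'europe-west1-b',
            # ... rest of the properties unchanged ...
        }
    }
]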

Get Lambda ARN into Resources: Type: AWS::Lambda::Permission

I have the following in my serverless.yml file:
lambdaQueueFirstInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: ServiceLambdaFunctionQualifiedArn
    Action: 'lambda:InvokeFunction'
    Principal: sqs.amazonaws.com
and I have the following in the Outputs section:
Outputs:
  ServiceLambdaFunctionQualifiedArn:
    Value:
      'Fn::GetAtt': [lambdaQueueFirst, Arn]
This comes back with the message:
Template error: instance of Fn::GetAtt references undefined resource lambdaQueueFirst
Am I missing something, and if so, what? There is very little in terms of help or examples…
Also, is there a better way of getting the Lambda ARN into the permissions code? If so, what is it?
You can use environment variables to construct the ARN value. In your case, you can define a variable in your provider section like below. You might need to modify it a little according to your application.
service: serverlessApp2  # service names cannot contain spaces
provider:
  name: aws
  runtime: python3.6
  region: ap-southeast-2
  stage: dev
  environment:
    AWS_ACCOUNT: 1234567890 # use your own AWS account number here
    # define the ARN of the function that you want to invoke
    FUNCTION_ARN: "arn:aws:lambda:${self:provider.region}:${self:provider.environment.AWS_ACCOUNT}:function:${self:service}-${self:provider.stage}-lambdaQueueFirst"
Outputs:
  ServiceLambdaFunctionQualifiedArn:
    Value: "${self:provider.environment.FUNCTION_ARN}"
See the Serverless variables documentation for AWS for examples.
You can do this. Note that the Serverless Framework derives the generated Lambda's logical ID by capitalizing the function name and appending LambdaFunction (so a function defined as loghandler becomes LoghandlerLambdaFunction), which is also why Fn::GetAtt on the raw name lambdaQueueFirst fails:
resources:
  Resources:
    LoggingLambdaPermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName: { "Fn::GetAtt": ["LoghandlerLambdaFunction", "Arn"] }
        Action: lambda:InvokeFunction
        Principal: { "Fn::Join": ["", ["logs.", { "Ref": "AWS::Region" }, ".amazonaws.com"]] }
reference:
https://github.com/andymac4182/serverless_example

Get TargetGroupArn from name?

You use TargetGroupArn in a CF template for ECS services. I have a situation where the target group has already been created, and I want to make it a parameter for the template.
But those ARNs are awful:
arn:aws:elasticloadbalancing:us-east-1:123456:targetgroup/mytarget/4ed48ba353064a79
That unique number at the end makes this almost impossible. Can I reference the target group by name instead of the full ARN in the template?
Maybe I can use Fn::GetAtt here, but I'm not sure what that looks like.
This doesn't work:
TargetGroupArn: !GetAtt mytarget.TargetGroupName
I get the error: An error occurred (ValidationError) when calling the CreateChangeSet operation: Template error: instance of Fn::GetAtt references undefined resource mytarget
Unfortunately with Target Groups, you won't be able to use convention to determine its ARN due to the extra string at the end.
If the Target Group was created in CloudFormation, it's easy enough to get the ARN output by using !Ref myTargetGroup.
If the Target Group was created in another CF stack, try exporting the Target Group ARN and use Fn::ImportValue when creating the ECS Service to input the Target Group ARN:
Type: "AWS::ECS::Service"
Properties:
...
LoadBalancers:
- ContainerName: MyContainer
ContainerPort: 1234
TargetGroupArn: !ImportValue myExportedTargetGroupARN
...
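On the exporting stack's side, the output would look something like this (a sketch; MyTargetGroup is a placeholder for the target group's logical ID, and !Ref on an AWS::ElasticLoadBalancingV2::TargetGroup returns its ARN):

Outputs:
  MyTargetGroupArn:
    Value: !Ref MyTargetGroup
    Export:
      Name: myExportedTargetGroupARN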
If you want to use an existing target group, pass the target group name as a default parameter to the service CF template. Internally, refer to that parameter with Ref for the TargetGroupArn in the Actions section of the ListenerRule; it will get the target group ARN.
Check this link: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-service.html
{
    "Parameters": {
        "VPC": {
            ...
        },
        "TargetGroup": {
            "Description": "TargetGroup name for ListenerRule",
            "Type": "String",
            "Default": "my-target"
        }
    },
    "Resources": {
        "Service": {
            ...
        },
        "TaskDefinition": {
            ...
        },
        "ListenerRule": {
            ...
            "Actions": [
                {
                    "TargetGroupArn": {
                        "Ref": "TargetGroup"
                    },
                    "Type": "forward"
                }
            ]
        },
        "ServiceRole": {
        }
    }
}

!ImportValue in Serverless Framework not working

I'm attempting to export a DynamoDB StreamArn from a stack created in CloudFormation, then reference the export using !ImportValue in the serverless.yml.
But I'm getting this error message:
unknown tag !<!ImportValue> in "/codebuild/output/src/serverless.yml"
The cloudformation and serverless.yml are defined as below. Any help appreciated.
StackA.yml
AWSTemplateFormatVersion: 2010-09-09
Description: Resources for the registration site
Resources:
  ClientTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: client
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 2
        WriteCapacityUnits: 2
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
Outputs:
  ClientTableStreamArn:
    Description: The ARN for My ClientTable Stream
    Value: !GetAtt ClientTable.StreamArn
    Export:
      Name: my-client-table-stream-arn
serverless.yml
service: my-service
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeStream
        - dynamodb:GetRecords
        - dynamodb:GetShardIterator
        - dynamodb:ListStreams
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:*:*:table/client
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn: !ImportValue my-client-table-stream-arn
          batchSize: 1
Solved by using ${cf:stackName.outputKey}
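In this case that looks something like the following (a sketch; registration-site is a placeholder for whatever stack name StackA.yml was deployed under, and ClientTableStreamArn is the output key from that stack):

functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn: ${cf:registration-site.ClientTableStreamArn}
          batchSize: 1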
I struggled with this as well, and what did the trick for me was:
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn:
            !ImportValue my-client-table-stream-arn
          batchSize: 1
Note that the intrinsic function !ImportValue is on a new line and indented; otherwise the whole event is ignored when cloudformation-template-update-stack.json is generated.
It appears that you're using the !ImportValue shorthand for CloudFormation YAML. My understanding is that when CloudFormation parses the YAML, !ImportValue simply aliases Fn::ImportValue. According to the Serverless function documentation, they should support the Fn::ImportValue form of imports.
Based on the documentation for Fn::ImportValue, you should be able to reference your export like:
- stream:
    type: dynamodb
    arn: {"Fn::ImportValue": "my-client-table-stream-arn"}
    batchSize: 1
Hope that helps solve your issue.
I couldn't find it clearly documented anywhere, but what seemed to resolve the issue for me is:
Variables which need to be exposed/exported in Outputs must have an Export property with a Name sub-property:
In serverless.ts
resources: {
  Resources: resources["Resources"],
  Outputs: {
    // For the event bus
    EventBusName: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME",
      },
      Value: {
        Ref: "UNIQUE_EVENTBUS_NAME",
      },
    },
    // For something like SQS, or anything else, it would be the same
    IDVerifyQueueARN: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_SQS_NAME",
      },
      Value: { "Fn::GetAtt": ["UNIQUE_SQS_NAME", "Arn"] },
    },
  },
}
Once this is deployed, you can check whether the exports are present by running the following in a terminal (using your associated AWS credentials):
aws cloudformation list-exports
Then there should be an entry with a Name property in the resulting list:
{
    "ExportingStackId": "***",
    "Name": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME", <-- same as given above (but populated with your service and stage)
    "Value": "***"
}
And then, if the above is successful, you can reference it with "Fn::ImportValue" like so, e.g.:
"Resource": {
"Fn::ImportValue": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME", <-- same as given above (but will be populated with your service and stage)
}