Creating cloudformation stacks for cloudwatch alarms - aws-cloudformation

I am trying to create multiple alarms (all following the same pattern) using CloudFormation stacks.
Based on a parameter value, e.g. AlarmTypes: "a,b,c",
I need to create alarms like the ones below:
Alarm1: AlarmName: a-alarm with MetricName: a-metric
Alarm2: AlarmName: b-alarm with MetricName: b-metric
Alarm3: AlarmName: c-alarm with MetricName: c-metric
Could someone help me with the best approach?
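One way this is often handled (a minimal sketch, not from the original post, assuming the AWS::LanguageExtensions transform and its Fn::ForEach intrinsic are available to you) is to loop over a CommaDelimitedList parameter; the namespace, statistic, period, threshold and comparison operator below are placeholders you would replace with your own values:

Transform: AWS::LanguageExtensions
Parameters:
  AlarmTypes:
    Type: CommaDelimitedList
    Default: "a,b,c"
Resources:
  Fn::ForEach::AlarmLoop:
    - AlarmType
    - !Ref AlarmTypes
    - ${AlarmType}Alarm:
        Type: AWS::CloudWatch::Alarm
        Properties:
          AlarmName: !Sub "${AlarmType}-alarm"
          MetricName: !Sub "${AlarmType}-metric"
          Namespace: "MyApp/Metrics"                          # placeholder namespace
          Statistic: Sum                                      # placeholder
          Period: 300                                         # placeholder
          EvaluationPeriods: 1                                # placeholder
          Threshold: 1                                        # placeholder
          ComparisonOperator: GreaterThanOrEqualToThreshold   # placeholder

If the transform is not an option, the fallback is to declare one AWS::CloudWatch::Alarm resource per type, or to put a single parameterised alarm into a nested stack and instantiate it once per alarm type.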

Related

grafana - data transformation from journald

I would like to clean up the data gathered by promtail. Specifically, I want the Grafana log dashboard to show only the SYSLOG_TIMESTAMP and MESSAGE fields. The problem is that the Grafana transform doesn't show fields that are otherwise detected by it. The query I'm using is simple: {name="promtailtest1"}. Any ideas where to start looking?
Detected fields by grafana transform:
Detected fields by grafana:
Log labels
job systemd-journal
name1 promtailtest1
Detected fields
MESSAGE "1"
PRIORITY "5"
SYSLOG_FACILITY "1"
SYSLOG_IDENTIFIER "promtailtest1"
SYSLOG_TIMESTAMP "Mar 1 11:15:25 "
_BOOT_ID "d5f4b43026124bccb1372918ff44fb70"
_GID "1000"
_HOSTNAME "pc"
_MACHINE_ID "cce4800beb84473b9cd93f8d6412880a"
_PID "1902474"
_SELINUX_CONTEXT "unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023"
_SOURCE_REALTIME_TIMESTAMP "1646126125653980"
_TRANSPORT "syslog"
_UID "1000"
ts 2022-03-01T09:15:25.654Z
tsNs 1646126125654005000
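One place to start (a sketch of a promtail-side fix rather than a Grafana transform; it assumes the journal scrape config currently has json: true, which is why every journal field shows up as a detected field, and the extracted-key name line is arbitrary) is to rebuild the log line in the pipeline so that only the two wanted fields are shipped to Loki:

scrape_configs:
  - job_name: journal
    journal:
      json: true                    # forwards the whole journal entry as JSON
      labels:
        job: systemd-journal
    pipeline_stages:
      # parse the JSON entry and extract only the fields we care about
      - json:
          expressions:
            syslog_timestamp: SYSLOG_TIMESTAMP
            message: MESSAGE
      # rebuild the log line from those two fields
      - template:
          source: line
          template: '{{ .syslog_timestamp }} {{ .message }}'
      - output:
          source: line

Alternatively, setting json: false makes promtail ship only the MESSAGE text as the log line, at the cost of losing SYSLOG_TIMESTAMP.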

How to access CloudWatch Event data from triggered Fargate task?

I read the docs on how to Run an Amazon ECS Task When a File is Uploaded to an Amazon S3 Bucket. However, this document stops short of explaining how to get the bucket/key values from the triggering event from within the Fargate task code itself. How can that be done?
I am not sure if you still need an answer for this one, but I did something similar to what Steven1978 mentioned, only using CloudFormation.
The config you're looking for is the InputTransformer. Check this example of a YAML CloudFormation template for an Event Rule:
rEventRuleForFileUpload:
  Type: AWS::Events::Rule
  Properties:
    Description: "EventRule"
    State: "ENABLED"
    EventPattern:
      source:
        - "aws.s3"
      detail-type:
        - 'AWS API Call via CloudTrail'
      detail:
        eventSource:
          - s3.amazonaws.com
        eventName:
          - "PutObject"
          - "CompleteMultipartUpload"
        requestParameters:
          bucketName: "{YOUR_BUCKET_NAME}"
    Targets:
      - Id: '{YOUR_ECS_CLUSTER_ID}'
        Arn: !Sub "arn:aws:ecs:${AWS::Region}:${AWS::AccountId}:cluster/${NAME_OF_YOUR_CLUSTER_RESOURCE}"
        RoleArn: !GetAtt {YOUR_ROLE}.Arn
        EcsParameters:
          TaskCount: 1
          TaskDefinitionArn: !Ref {YOUR_TASK_DEFINITION}
          LaunchType: FARGATE
          {... WHATEVER CONFIG YOU MIGHT HAVE...}
        InputTransformer:
          InputPathsMap:
            s3_bucket: "$.detail.requestParameters.bucketName"
            s3_key: "$.detail.requestParameters.key"
          InputTemplate: '{ "containerOverrides": [ { "name": "{THE_NAME_OF_YOUR_CONTAINER_DEFINITION}", "environment": [ { "name": "EVENT_BUCKET", "value": <s3_bucket> }, { "name": "EVENT_OBJECT_KEY", "value": <s3_key> }] } ] }'
With this approach, you'll be able to get the s3 bucket name (EVENT_BUCKET) and the s3 object key (EVENT_OBJECT_KEY) as environment variables inside your container.
The info isn't very clear, indeed, but here are some sources I used to finally get it working:
Container Override:
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerOverride.html
InputTransformer:
https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_InputTransformer.html#API_InputTransformer_Contents

Get Lambda Arn into Resources : Type: AWS::Lambda::Permission

I have the following in my serverless.yml file:
lambdaQueueFirstInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: ServiceLambdaFunctionQualifiedArn
    Action: 'lambda:InvokeFunction'
    Principal: sqs.amazonaws.com
and I have the following in the Outputs section:
Outputs:
  ServiceLambdaFunctionQualifiedArn:
    Value:
      'Fn::GetAtt': [ lambdaQueueFirst, Arn ]
this comes back with a message:
Template error: instance of Fn::GetAtt references undefined resource lambdaQueueFirst
Am I missing something, and if so, what? There is very little out there in terms of help or examples.
Also, is there a better way of getting the Lambda ARN into the permissions code? If so, what is it?
You can use environment variables to construct the ARN value. In your case, you can define a variable in your provider section as shown below; you might need to modify it a little according to your application.
service: serverlessApp2
provider:
  name: aws
  runtime: python3.6
  region: ap-southeast-2
  stage: dev
  environment:
    AWS_ACCOUNT: 1234567890 # use your own AWS account number here
    # define the ARN of the function that you want to invoke
    FUNCTION_ARN: "arn:aws:lambda:${self:provider.region}:${self:provider.environment.AWS_ACCOUNT}:function:${self:service}-${self:provider.stage}-lambdaQueueFirst"

Outputs:
  ServiceLambdaFunctionQualifiedArn:
    Value: "${self:provider.environment.FUNCTION_ARN}"
See the Serverless Framework documentation on variables for AWS for more examples.
you can do this:
resources:
  Resources:
    LoggingLambdaPermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName: { "Fn::GetAtt": ["LoghandlerLambdaFunction", "Arn"] }
        Action: lambda:InvokeFunction
        Principal: { "Fn::Join": ["", ["logs.", { "Ref": "AWS::Region" }, ".amazonaws.com"]] }
reference:
https://github.com/andymac4182/serverless_example
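Applied to the original snippet, the likely cause of the "undefined resource lambdaQueueFirst" error is that the Serverless Framework generates the CloudFormation logical ID from the function name (by convention a function named lambdaQueueFirst becomes LambdaQueueFirstLambdaFunction); a sketch under that assumption:

resources:
  Resources:
    lambdaQueueFirstInvokePermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName: { "Fn::GetAtt": ["LambdaQueueFirstLambdaFunction", "Arn"] }
        Action: lambda:InvokeFunction
        Principal: sqs.amazonaws.com
  Outputs:
    ServiceLambdaFunctionQualifiedArn:
      Value: { "Fn::GetAtt": ["LambdaQueueFirstLambdaFunction", "Arn"] }

The exact generated logical ID can be checked in the compiled template that serverless package writes under the .serverless/ directory.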

Pulumi invalid network configuration for bridge mode ECS service

I'm trying to create an ECS service using Pulumi, with a task definition that uses bridge network mode, in order to run multiple tasks on a single instance.
When creating the service, pulumi outputs the error: Plan apply failed: InvalidParameterException: Network Configuration is not valid for the given networkMode of this task definition.
It seems pulumi provides a networkConfiguration even though this is not permitted when the network mode is bridge:
[urn=urn:pulumi:dev::pulumi::pulumi:pulumi:Stack::pulumi-dev]
    + aws:ecs/service:Service: (create)
        [urn=urn:pulumi:dev::pulumi::awsx:x:ecs:EC2Service$aws:ecs/service:Service::test]
        cluster                        : "arn:aws:ecs:eu-central-1:131009595785:cluster/test-12196f9"
        deploymentMaximumPercent       : 200
        deploymentMinimumHealthyPercent: 100
        desiredCount                   : 2
        enableEcsManagedTags           : false
        launchType                     : "EC2"
        loadBalancers                  : [
            [0]: {
                containerName : "backend"
                containerPort : 3000
                targetGroupArn: "arn:aws:elasticloadbalancing:eu-central-1:131009595785:targetgroup/57d096ee-73ab93e/fce1408d3c067066"
            }
        ]
        name                           : "test-3e870ec"
        networkConfiguration           : {
            assignPublicIp: false
            securityGroups: [
                [0]: "sg-035513ef294414b65"
            ]
            subnets       : [
                [0]: "subnet-08831ff5642406fc7"
                [1]: "subnet-00e3e870707b6aa90"
            ]
        }
        schedulingStrategy             : "REPLICA"
        taskDefinition                 : "arn:aws:ecs:eu-central-1:131009595785:task-definition/test-aece9bcd:24"
        waitForSteadyState             : true
Is there a way to avoid setting the networkConfiguration? I can set securityGroups and subnets of the service to [] but there is no way to set assignPublicIp.
It looks like this was not yet supported by pulumi but was fixed in PR 233.
The fix is included in pulumi-awsx 0.18.2.
A networkConfiguration is now only specified for network mode awsvpc.

Mappings sections of the aws Cloudformation template and Serverless.yml

I have a little doubt about the "Mappings" section of the AWS CloudFormation syntax:
Example:
...
Mappings:
  accounts:
    56565d644801: true
    986958470041: true
    090960219037: true
    05166767667: false

functions:
  MyFunction:
    handler: src/MyFunction/func.lambda_handler
    role: MyRole
    events:
      - schedule:
          rate: rate(12 hours)
          enabled:
            Fn::FindInMap:
              - accounts
              - Ref "AWS::AccountId"
...
Could the Mappings section be included in a serverless.yml file?
I mean, even though it is valid CloudFormation syntax, would it be possible to include it in serverless.yml, so that it gets deployed later (serverless deploy / sls deploy ...)?
Thanks,
You might be able to use:
functions:
  # ...

resources:
  Mappings:
    accounts:
      56565d644801: true
      986958470041: true
      090960219037: true
      05166767667: false
Another way to work with mappings is through stage params:
https://www.serverless.com/framework/docs/guides/parameters
params:
  stage1:
    schedule: true
  stage2:
    schedule: false

functions:
  MyFunction:
    handler: src/MyFunction/func.lambda_handler
    role: MyRole
    events:
      - schedule:
          rate: rate(12 hours)
          enabled: ${param:schedule}
Then deploy, adding the stage argument (the default is dev):
serverless deploy --stage stage1