I'm new to AWS CloudFormation. I've used it to deploy CloudTrail to a number of accounts without issue, but now I'm trying to use one centralised SNS topic that each CloudTrail can publish to. If I edit the trail via the CloudTrail GUI it works, but I can't get CloudFormation to do it.
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Centralised CloudTrail
Resources:
  CloudTrail:
    Type: "AWS::CloudTrail::Trail"
    Properties:
      EnableLogFileValidation: 'false'
      IncludeGlobalServiceEvents: 'true'
      IsLogging: 'true'
      IsMultiRegionTrail: 'true'
      S3BucketName: company-cloudtrail-au
      TrailName: Trail1
      SnsTopicName: SNSTopic
I'm trying to use the ARN for the SNS topic (example below), but I can't figure out how to tell CloudFormation to use the ARN instead of the name; if I just put the name in, it creates a new SNS topic in each child account.
If I go to the GUI after the CloudFormation stack is deployed, I can then point the trail at the correct ARN and it's centralised, but that's not ideal. Let me know if SNS isn't meant to be shared across accounts, or if there is a better way to do this.
Example ARN for SNS Topic -
arn:aws:sns:ap-southeast-1:1111111111:SNSTopic
The solution depends on how you have created the centralised SNS topic. If it was created with a CloudFormation template, you can export the ARN of the SNS topic as a stack output. Ref: CloudFormation Outputs.
In the current stack, where you are creating the CloudTrail, you can then refer to the SNS topic's ARN using the import option. Ref: Fn::ImportValue.
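A minimal sketch of that export/import pattern, with illustrative resource and export names (note that stack exports can only be imported by stacks in the same account and Region):
# Stack that owns the centralised topic
Resources:
  CentralTrailTopic:
    Type: AWS::SNS::Topic
Outputs:
  CentralTrailTopicArn:
    Value: !Ref CentralTrailTopic    # Ref on an SNS topic returns its ARN
    Export:
      Name: CentralTrailTopicArn

# Stack that creates the trail
Resources:
  CloudTrail:
    Type: AWS::CloudTrail::Trail
    Properties:
      S3BucketName: company-cloudtrail-au
      IsLogging: true
      SnsTopicName: !ImportValue CentralTrailTopicArn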
(or)
If you have manually created the SNS topic, you can still pass the ARN of the topic as a parameter to the CloudFormation create-stack call. Ref: Parameters.
Hope this helps.
There are a number of ways to pass the ARN to SnsTopicName:
If you have created the SNS topic in the template itself (Type: AWS::SNS::Topic), then you can refer to the ARN of the topic as
SnsTopicName: !Ref MySNSTopic
(Ref on an AWS::SNS::Topic resource returns the topic ARN; !GetAtt MySNSTopic.TopicArn works as well.)
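In context, that would look something like this (the logical ID and topic name are just examples):
Resources:
  MySNSTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: cloudtrail-notifications
  CloudTrail:
    Type: AWS::CloudTrail::Trail
    Properties:
      S3BucketName: company-cloudtrail-au
      IsLogging: true
      SnsTopicName: !Ref MySNSTopic    # Ref returns the topic ARN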
If you have manually created the SNS topic in the console, then you can give the ARN directly as a literal string, for example:
SnsTopicName: arn:aws:sns:ap-southeast-1:1111111111:SNSTopic
You can pass the ARN in from the Parameters section, with the ARN as the Default: value:
Parameters:
  SNSTopic:
    Description: SNS Topic ARN
    Type: String
    Default: arn:aws:sns:ap-southeast-1:1111111111:SNSTopic
##########
Properties:
  SnsTopicName: !Ref SNSTopic
You can define the ARN in a separate config file and pass it in as a parameter to the deploy command for the template, or to the script that deploys the template.
Filename - configfile.conf
SNSTopic="arn:aws:sns:ap-southeast-1:1111111111:SNSTopic"
AWS Serverless Template
Parameters:
  SNSTopic:
    Type: String
##########
Properties:
  SnsTopicName: !Sub "${SNSTopic}"
HTTP APIs have OpenAPI specs (and Swagger UI)
Has anybody come across a technology or approach for maintaining live documentation of SNS topics and schemas?
Use CloudFormation templates to set up, deploy, and document your Amazon SNS configuration. For example, this is how you would describe a topic in SNS:
Type: AWS::SNS::Topic
Properties:
  ContentBasedDeduplication: Boolean
  DisplayName: String
  FifoTopic: Boolean
  KmsMasterKeyId: String
  Subscription:
    - Subscription
  Tags:
    - Tag
  TopicName: String
Check more on this here.
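A filled-in example of such a topic might look like this (topic name, subscription endpoint, and tag values are illustrative):
MyNotificationTopic:
  Type: AWS::SNS::Topic
  Properties:
    TopicName: order-events
    DisplayName: Order events
    Subscription:
      - Protocol: email
        Endpoint: team@example.com
    Tags:
      - Key: owner
        Value: platform-team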
You can also share your YAML or JSON CloudFormation templates, if needed.
Is it possible to use output.azure in Filebeat (in the filebeat.yml file) with input from a local file, or do we need Kafka as the output to inject logs into Azure Event Hub?
As far as I have explored, I can only see the Kafka output for sending logs to Azure Event Hub.
Also, is a storage account necessary for Event Hub ingestion or not?
Thanks!
We use Filebeat with Event Hubs via the Kafka surface (Event Hubs' Kafka-compatible endpoint). In your filebeat.yml you use the normal Kafka output pointing at the Event Hubs instance. Your config would be something like this:
output.kafka:
  topic: ${event_hub_connect_topic}
  required_acks: 1
  client_id: filebeat
  version: '1.0.0'
  hosts:
    - ${event_hub_connect_hosts}
  username: "$ConnectionString"
  password: ${event_hub_connect_string}
  ssl.enabled: true
  compression: none
Note that you can use env vars in these settings. Since we use filebeat in k8s, we provide them through a secret.
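For example, a secret along these lines (names and values are placeholders) can be exposed to the Filebeat container as environment variables, e.g. via envFrom with a secretRef:
apiVersion: v1
kind: Secret
metadata:
  name: filebeat-eventhub
type: Opaque
stringData:
  event_hub_connect_topic: my-event-hub                              # the Event Hub name acts as the Kafka topic
  event_hub_connect_hosts: my-namespace.servicebus.windows.net:9093  # Event Hubs Kafka endpoint
  event_hub_connect_string: Endpoint=sb://my-namespace.servicebus.windows.net/;SharedAccessKeyName=filebeat;SharedAccessKey=<key>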
As for the storage account, I believe you only need it if you want to use the capture feature of event hubs.
Whenever an S3 artifact is used, the following declaration is needed:
s3:
  endpoint: s3.amazonaws.com
  bucket: "{{workflow.parameters.input-s3-bucket}}"
  key: "{{workflow.parameters.input-s3-path}}/scripts/{{inputs.parameters.type}}.xml"
  accessKeySecret:
    name: s3-access-user-creds
    key: accessKeySecret
  secretKeySecret:
    name: s3-access-user-creds
    key: secretKeySecret
It would be helpful if this could be abstracted to something like:
custom-s3:
  bucket: "{{workflow.parameters.input-s3-bucket}}"
  key: "{{workflow.parameters.input-s3-path}}/scripts/{{inputs.parameters.type}}.xml"
Is there a way to make this kind of custom definition in Argo to reduce boilerplate?
For a given Argo installation, you can set a default artifact repository in the workflow controller's configmap. This will allow you to only specify the key (assuming you set everything else in the default config - if not everything is defined for the default, you'll need to specify more things).
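For example, the default could be set in the workflow-controller-configmap along these lines (bucket and secret names are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    s3:
      endpoint: s3.amazonaws.com
      bucket: my-default-artifact-bucket
      accessKeySecret:
        name: s3-access-user-creds
        key: accessKeySecret
      secretKeySecret:
        name: s3-access-user-creds
        key: secretKeySecret
With that in place, a workflow artifact only needs to specify its s3.key and inherits the rest from the default.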
Unfortunately, that will only work if you're only using one S3 config. If you need multiple configurations, cutting down on boilerplate will be more difficult.
In response to your specific question: not exactly. You can't create a custom some-keyname (like custom-s3) as a member of the artifacts array. The exact format of the YAML is defined in Argo's Workflow Custom Resource Definition. If your Workflow YAML doesn't match that specification, it will be rejected.
However, you can use external templating tools to populate boilerplate before the YAML is installed in your cluster. I've used Helm before to do exactly that with a collection of S3 configs. At the simplest, you could use something like sed.
tl;dr - for one S3 config, use default artifact config; for multiple S3 configs, use a templating tool.
I have a bunch of CloudFormation templates that have conditional resources in them for alerting; only the prod stacks get these resources created. I need the IAM policy I am creating in the stack to reflect those conditional resources. So far I am not finding a way to do this. I have tried using Condition: in a separate policy document and it seems to be ignored.
I'd check out the Fn::If intrinsic function. It's really useful for stuff like this. For example, if I have a ShouldGenerateBucket condition and two buckets, constant-bucket that will always be created and conditional-bucket that might be, I can use the condition in my policy like this:
Type: "AWS::IAM::Policy"
Properties:
  PolicyName: "RoleAccess"
  PolicyDocument:
    Version: "2012-10-17"
    Statement:
      - Effect: "Allow"
        Action: "s3:*"
        Resource:
          - arn:aws:s3:::constant-bucket
          - !If
            - ShouldGenerateBucket
            - arn:aws:s3:::conditional-bucket
            - !Ref AWS::NoValue
This will add the additional resource if ShouldGenerateBucket is true, and ignore it otherwise.
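For completeness, the condition itself is declared in the template's Conditions section, for example driven by a parameter (the parameter name and value here are just an example):
Parameters:
  Environment:
    Type: String
    Default: dev
Conditions:
  ShouldGenerateBucket: !Equals [!Ref Environment, prod]
The conditional bucket resource would then carry Condition: ShouldGenerateBucket, so it is only created in the prod stacks.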
I have a serverless function that looks like this:
functions:
  ScooterExecution:
    handler: ScooterExecution.hello
    name: scooter-execution
    memorySize: 256
    timeout: 300
    events:
      - s3:
        bucket: ScooterData
        event: s3:ObjectCreated:*
The docs say that running this should create an S3 bucket and fire the function whenever an object is created.
However, the template it creates makes no mention of an S3 bucket: it does not create an S3 bucket named scooterdata, nor does it attempt to register any triggers on the lambda.
What's happening here?
You probably just missed some indentation in your serverless.yml file. The section under s3 needs an extra level of indentation, otherwise the event source isn't recognized:
service: aws-nodejs
provider:
  name: aws
  runtime: nodejs6.10
functions:
  hello:
    handler: handler.hello
    events:
      - s3:
          bucket: sample653536
          event: s3:ObjectCreated:*
For framework version 3, if the bucket name does NOT respect S3 naming rules, sls deploy still succeeds, yet, as you have mentioned, AWS will not show the bucket. I would consider this a silent failure, and it's not intuitive. If you were to do the same thing with a resource, sls would refuse to run the CloudFormation deployment:
resources:
  Resources:
    S3Bucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: someDuplicateName
S3 bucket name rules