How to Include/Reference Multiple Resource Files in CloudFormation?

I am new to CloudFormation templates. I am trying to organize my templates by AWS service so that they are easier to manage: for example, IAM roles in one file, DynamoDB tables in another, and S3 and Lambda resources in individual files. When I try to assemble a master file from these partials, I am only able to include a single partial in the Resources section using Fn::Transform. I need two suggestions: am I going in the right direction, and how do I include all the partials in my main.yml?
Resources:
  "Fn::Transform":
    Name: 'AWS::Include'
    Parameters:
      Location:
        Fn::Sub: "s3://s3url/iam-roles.yml"
  "Fn::Transform":
    Name: 'AWS::Include'
    Parameters:
      Location:
        Fn::Sub: "s3://s3url/ddbtables.yml"
The above code throws an error, since the two "Fn::Transform" keys collide. How do I merge these partials?

The templates you have created have to be included using AWS::CloudFormation::Stack (nested stacks), not Fn::Transform.
Fn::Transform invokes a macro, which preprocesses a CloudFormation template. For example, you can use a macro to search and replace strings in a template before it is deployed.
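For the nested-stack approach, a minimal sketch of the master template could look like this, reusing the placeholder bucket/file names from your question (note that TemplateURL expects an HTTPS S3 URL rather than an s3:// URI):

# main.yml: composing the partials as nested stacks (a sketch)
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  IamRolesStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/s3url/iam-roles.yml
  DynamoDbTablesStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/s3url/ddbtables.yml

Each partial must itself be a complete template with its own Resources section; values can be passed between nested stacks via Outputs and Parameters.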

Related

Differences in designing a serverless function vs. regular server

I am wondering what approach to take in designing serverless functions, taking the design of a regular server as a point of reference.
With a traditional server, one would focus on defining collections and then CRUD operations one can run on each of them (HTTP verbs such as GET or POST).
For example, you would have a collection of users, and you can get all records via app.get('/users', ...), get specific one via app.get('/users/{id}', ...) or create one via app.post('/users', ...).
How differently would you approach designing a serverless function? Specifically:
Is there any sense in differentiating between HTTP operations, or would you just go with POST? I find it useful to have them defined on the client side, to decide whether I want to retry in case of an error (if the operation is idempotent, it is safe to retry, etc.), but it does not seem to matter in the back-end.
Naming. I assume you would use something like getAllUsers(), whereas with a regular server you would define a collection of users and then just use GET to specify what you want to do with it.
Size of functions: if you need to do a number of things in the back-end in one step, would you define a number of small functions, such as lookupUser() and endTrialForUser() (fired if the user we got from lookupUser() has been on trial longer than 7 days), and then run them one after another from the client (deciding on the client whether the trial should be ended seems quite unsafe), or would you just create a getUser() and handle all the logic there?
Routing. In serverless functions, we can't really do anything like .../users/${id}/accountData. How would you go about fetching nested data? Would you just return the complete JSON every time?
I have been looking for some comprehensive articles on the matter but no luck. Any suggestions?
This is a very broad question you've asked. Let me try answering it point by point.
Firstly, the approach you're describing is the Serverless API project approach. You can clone the sample project to get a better understanding of how to build REST APIs for performing CRUD operations. Start by installing the SAM CLI, then run the following commands.
$ sam init
Which template source would you like to use?
1 - AWS Quick Start Templates
2 - Custom Template Location
Choice: 1
Cloning from https://github.com/aws/aws-sam-cli-app-templates
Choose an AWS Quick Start application template
1 - Hello World Example
2 - Multi-step workflow
3 - Serverless API
4 - Scheduled task
5 - Standalone function
6 - Data processing
7 - Infrastructure event management
8 - Machine Learning
Template: 3
Which runtime would you like to use?
1 - dotnetcore3.1
2 - nodejs14.x
3 - nodejs12.x
4 - python3.9
5 - python3.8
Runtime: 2
Based on your selections, the only Package type available is Zip.
We will proceed to selecting the Package type as Zip.
Based on your selections, the only dependency manager available is npm.
We will proceed copying the template using npm.
Project name [sam-app]: sample-app
-----------------------
Generating application:
-----------------------
Name: sample-app
Runtime: nodejs14.x
Architectures: x86_64
Dependency Manager: npm
Application Template: quick-start-web
Output Directory: .
Next steps can be found in the README file at ./sample-app/README.md
Commands you can use next
=========================
[*] Create pipeline: cd sample-app && sam pipeline init --bootstrap
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
Coming to your questions point-wise:
Yes, you should differentiate your HTTP operations with their suitable HTTP verbs. This can be configured at the API Gateway and checked for in the Lambda code. Check the source code of the handlers and the template.yml file from the project you've just cloned with SAM.
// src/handlers/get-by-id.js
if (event.httpMethod !== 'GET') {
  throw new Error(`getMethod only accepts GET method, you tried: ${event.httpMethod}`);
}
# template.yml
Events:
  Api:
    Type: Api
    Properties:
      Path: /{id}
      Method: GET
The naming is totally up to the developer. You can follow the same approach you're following in your regular server project.
You can define the handler with the name getAllUsers or users and then set the path of that resource to GET /users in AWS API Gateway. You can choose whatever HTTP verbs you like. Check this tutorial out for a better understanding.
Again, this is up to you. You can create a single Lambda that handles all the logic, or create individual Lambdas that are triggered one after another by the client based on the response from the previous API call. I would say create a single Lambda and return the cumulative response, to reduce the number of requests. But again, this depends on the UI integration: if your screens demand separate API calls, then by all means create individual Lambdas.
This is not true. You can specify dynamic routes in API Gateway.
You can easily set path parameters in your routes by using {variableName} when defining the routes in API Gateway.
GET /users/{userId}
The userId will then be available at your disposal in the Lambda function via event.pathParameters.
GET /users/{userId}?a=x
Similarly, you can pass query strings and access them via event.queryStringParameters in code. Have a look at working with routes.
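As a hedged sketch of that wiring in a SAM template (GetUserFunction and its handler path are hypothetical names, not from the sample project):

# template.yml: a sketch; function and handler names are made up
Resources:
  GetUserFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get-user.handler
      Runtime: nodejs14.x
      Events:
        Api:
          Type: Api
          Properties:
            Path: /users/{userId}  # exposed to the code as event.pathParameters.userId
            Method: GET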
A tutorial I would recommend for you:
Tutorial: Build a CRUD API with Lambda and DynamoDB

Config server with Vault backend - fetch secrets from multiple paths

We are using Config Server with a Vault backend to fetch application secrets.
The Config Server project uses the spring-vault-core dependency, with spring-vault-dependencies for dependency management of Vault.
The Vault-related config in the application YAML file is as follows:
spring:
  cloud:
    config:
      server:
        vault:
          order: 0
          uri: <complete URI>
          connection-timeout: 5000
          read-timeout: 15000
          kvVersion: 2
          backend: secret
          defaultKey: config
This works fine and fetches the Vault secrets under secret/config.
What I am unable to do is fetch secrets from multiple paths in Vault (secret/config + secret/customFolder). I have tried adding comma-separated application names etc., as suggested in various posts, but it does not work. Has anyone tried something similar?
You can take a look at the composite profile.
There are a lot of additional questions here: what exactly are you trying to do, and why do you want this?
For us, for example, it was important to split the infra services' configurations and also to split the microservices' own configurations, with the important requirement of being able to "overwrite" them (in case of migrations, for instance).
We achieved that with two things:
on the config server side, we use a composite configuration (with exactly the same type and uri, but a slightly different backend and keys), as sketched below;
on the config client's side, we specify several values for the spring.cloud.config.name property (a comma-separated list).
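A minimal sketch of the server side, assuming the composite profile and the two paths from the question (secret/config and secret/customFolder); the uri and key names are copied from the question and may need adjusting to your Spring Cloud Config version:

# Config server application.yml: a sketch using the composite profile
spring:
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
          # Each entry is a full Vault environment repository; order determines precedence
          - type: vault
            uri: <complete URI>
            kvVersion: 2
            backend: secret
            defaultKey: config
          - type: vault
            uri: <complete URI>
            kvVersion: 2
            backend: secret
            defaultKey: customFolder

On the client side, something like spring.cloud.config.name: app1,shared (names hypothetical) then makes the server resolve each listed name against every backend in the composite.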

Set up an AWS API Gateway with Serverless

I built out my dev environment manually; I wanted to focus on logic and skip the learning curve on Serverless. But before deploying to prod I want to standardize and parameterize my stack.
Setting up my DynamoDB tables has been straightforward, but I'm running into snags with deploying a new API Gateway.
I've been using AWS CodeBuild to package layers for Lambda functions and an S3 bucket to store my Lambda code.
Let's take my dev-rest-auth API (custom authentication) as an example.
I have several resources for login/out, passwords, and registration; most are protected by a custom authorizer (login/logout aren't) and all have CORS policies. I'm using a custom domain, account-api.dev.example.com. I use several DynamoDB tables for housing user data (let's avoid security discussions please; I'm not storing raw passwords and am encrypting using industry-leading standards) and temporary codes (password reset & account verification).
To test a Serverless implementation, I'd like to build a YAML file that recreates my existing infrastructure. So the first question is: is that possible? Can I parameterize the deployment of an API Gateway, with a custom authorizer, a custom domain, and several Lambdas?
The next question is how.
Organizationally, I'm breaking out my yml files by resource:
I have several DynamoDB yml files that look like this:
Resources:
  UserTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: ${self:custom.resource-prefix}-UserTable-${self:custom.stage}
      AttributeDefinitions:
        - AttributeName: email
          AttributeType: S
      KeySchema:
        - AttributeName: email
          KeyType: HASH
      # Set the capacity to auto-scale
      BillingMode: PAY_PER_REQUEST
This was from a much earlier attempt (several months ago, found via googling, but I don't remember where I found it or exactly what it does) at standing up an API Gateway:
Resources:
  SharedGW:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: SharedGW
Outputs:
  apiGatewayRestApiId:
    Value:
      Ref: SharedGW
    Export:
      Name: SharedGW-restApiId
  apiGatewayRestApiRootResourceId:
    Value:
      Fn::GetAtt:
        - SharedGW
        - RootResourceId
    Export:
      Name: SharedGW-rootResourceId
I pull everything together in a serverless.yml file that references the resource files like this:
...
resources:
  # S3 Bucket
  - ${file(resources/s3/s3-static-host.yml)}
  - ${file(resources/s3/s3-CodeBuildResults.yml)}
  # DynamoDB
  - ${file(resources/dynamodb/dynamodb-mealtable.yml)}
  - ${file(resources/dynamodb/dynamodb-ziptable.yml)}
  - ${file(resources/dynamodb/dynamodb-usertable.yml)}
  - ${file(resources/dynamodb/dynamodb-passwordresettable.yml)}
  - ${file(resources/dynamodb/dynamodb-accountregistrationtable.yml)}
  - ${file(resources/dynamodb/dynamodb-restaurant_table.yml)}
  # DNS Records (Route 53)
  # TODO: Determine why DNS hangs
  # - ${file(resources/route_53/dev_dns.yml)}
  # Gateways
  - ${file(resources/api_gateway/local_rest_auth.yml)}
  # - ${file(resources/api_gateway/rest_auth.yml)}
...
I've seen several examples of connecting a Lambda to a gateway, but it's not clear where the gateway is being created; it's also not clear how the Lambda is being created, or whether I'd be able to reference layers/function code in S3.
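For what it's worth, here is a minimal Serverless Framework sketch of attaching a function to the shared gateway exported above; the handler path is hypothetical, and the Fn::ImportValue names come from the Outputs in the earlier snippet:

# serverless.yml: a sketch reusing the shared gateway via its exports
provider:
  name: aws
  runtime: nodejs14.x
  apiGateway:
    restApiId:
      Fn::ImportValue: SharedGW-restApiId
    restApiRootResourceId:
      Fn::ImportValue: SharedGW-rootResourceId

functions:
  login:
    handler: src/handlers/login.handler  # hypothetical path
    events:
      - http:
          path: login
          method: post
          cors: true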
I've seen some tutorials for doing this with AWS Amplify via the CLI, but my dream state would be to effectively create a new AWS account, deploy this Serverless stack, and have my site up and running automatically, with just a little Route 53 work to point it at a new domain.

How do I load values from a .json file into an Azure DevOps YAML pipeline parameter?

The Microsoft documentation explains the use of parameters in YAML pipeline jobs as:
# File: azure-pipelines.yml
trigger:
- master

extends:
  template: simple-param.yml
  parameters:
    yesNo: false  # set to a non-boolean value to have the build fail
But instead of statically specifying the value of yesNo:, I'd prefer to load it from a completely separate JSON config file, preferably one that both my build job and my application could share, so that parameters specified for the application could also be used in the build job.
Thus the question:
How do I load values from a .json file into a Devops Yaml Pipeline Parameter?
I've been using this marketplace task:
https://marketplace.visualstudio.com/items?itemName=OneLuckiDev.json2variable
It's been working great so far. I haven't tried it with builds, but I can't see why it wouldn't work with separate build pipelines/multi-stage builds. There are a few things you have to be aware of or will stumble upon, like double-escaping slashes in directory paths; and you'll have to fetch secrets from someplace else, like traditional variable groups.
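If you'd rather avoid a marketplace task, a minimal sketch of the same idea uses a plain script step and the ##vso[task.setvariable] logging command (config.json and its yesNo key are hypothetical). Note this produces a runtime variable, usable by later steps and tasks, but not a compile-time template parameter, since those are resolved before the pipeline runs:

# azure-pipelines.yml: a sketch; config.json and the yesNo key are made up
steps:
- bash: |
    # Read the value out of the shared JSON file with jq and expose it as a variable
    yesNo=$(jq -r '.yesNo' config.json)
    echo "##vso[task.setvariable variable=yesNo]$yesNo"
  displayName: Load values from config.json
- bash: echo "yesNo is $(yesNo)"
  displayName: Use the loaded value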

In a multi-repo deploy, how can I keep the pipelines co-located with their source code

I have a Deploy to IBM Cloud button that deploys 3 git repos and works great, except that I have a maintenance problem. If I make an edit in one of the repos that impacts how it is built, I have to change the pipeline.yml, which lives in another repo, namely the same repo as my .bluemix/toolchain.yml. I would prefer to have each pipeline.yml self-contained in the repo it actually pertains to. My toolchain.yml has 3 entries like:
services:
  dashboard-build:
    service_id: pipeline
    parameters:
      services:
        - dashboard-repo
      name: 'dashboard-{{toolchain.name}}'
      configuration:
        content:
          $ref: dashboard.pipeline.yml
          $refType: text
I tried an absolute path like:
$ref: https://raw.githubusercontent.com/org/repo/master/.bluemix/dashboard.pipeline.yml
but it errored out with:
repository contains an invalid template. File not found
Can I change the pipeline's location to be in its own repository, or does it have to be co-located with the toolchain.yml?
Yes, as you've guessed, any files referenced with $ref or $text must be co-located in the same repo or zip file. We might offer support for referencing and extending another template in the future, but there is nothing concrete yet.
--
Also...
$text should be used in preference to $ref and $refType here.
The pipeline's "content" element expects raw text, and historically that is why we added $refType: text. However, $ref as specified in JSON Reference explicitly ignores siblings, so although we currently support $refType, it is better to just use $text going forward:
content:
  $text: dashboard.pipeline.yml
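Putting that together, the dashboard-build entry from the question would then read (same keys as before; only the content reference changes):

services:
  dashboard-build:
    service_id: pipeline
    parameters:
      services:
        - dashboard-repo
      name: 'dashboard-{{toolchain.name}}'
      configuration:
        content:
          $text: dashboard.pipeline.yml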