Differences in designing a serverless function vs. a regular server (REST)

I am wondering what approach to take when designing serverless functions, using the design of a regular server as a point of reference.
With a traditional server, one would focus on defining collections and then the CRUD operations one can run on each of them (HTTP verbs such as GET or POST).
For example, you would have a collection of users; you can get all records via app.get('/users', ...), get a specific one via app.get('/users/{id}', ...), or create one via app.post('/users', ...).
How differently would you approach designing a serverless function? Specifically:
Does it make sense to differentiate between HTTP operations, or would you just go with POST? I find it useful to have them defined on the client side, to decide whether I want to retry in case of an error (if the operation is idempotent, it is safe to retry, etc.), but it does not seem to matter on the back-end.
Naming. I assume you would use something like getAllUsers(), whereas with a regular server you would define a collection of users and then just use GET to specify what you want to do with it.
Size of functions: suppose you need to do a number of things in the back-end in one step. Would you define a number of small functions, such as lookupUser(), endTrialForUser() (fired if the user returned by lookupUser() has been on trial longer than 7 days), etc., and then run them one after another from the client (deciding on the client whether the trial should be ended, which seems quite unsafe), or would you just create a getUser() and handle all the logic there?
Routing. In serverless functions, we can't really do anything like .../users/${id}/accountData. How would you go about fetching nested data? Would you just return the complete JSON every time?
I have been looking for some comprehensive articles on the matter but no luck. Any suggestions?

This is a very broad question that you've asked. Let me try answering it point by point.
Firstly, the approach you're describing maps to the Serverless API project approach. You can clone the sample project to get a better understanding of how to build REST APIs for performing CRUD operations. Start by installing the SAM CLI and then run the following commands.
$ sam init
Which template source would you like to use?
1 - AWS Quick Start Templates
2 - Custom Template Location
Choice: 1
Cloning from https://github.com/aws/aws-sam-cli-app-templates
Choose an AWS Quick Start application template
1 - Hello World Example
2 - Multi-step workflow
3 - Serverless API
4 - Scheduled task
5 - Standalone function
6 - Data processing
7 - Infrastructure event management
8 - Machine Learning
Template: 3
Which runtime would you like to use?
1 - dotnetcore3.1
2 - nodejs14.x
3 - nodejs12.x
4 - python3.9
5 - python3.8
Runtime: 2
Based on your selections, the only Package type available is Zip.
We will proceed to selecting the Package type as Zip.
Based on your selections, the only dependency manager available is npm.
We will proceed copying the template using npm.
Project name [sam-app]: sample-app
-----------------------
Generating application:
-----------------------
Name: sample-app
Runtime: nodejs14.x
Architectures: x86_64
Dependency Manager: npm
Application Template: quick-start-web
Output Directory: .
Next steps can be found in the README file at ./sample-app/README.md
Commands you can use next
=========================
[*] Create pipeline: cd sample-app && sam pipeline init --bootstrap
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
Coming to your questions point by point:
Yes, you should differentiate your HTTP operations with their suitable HTTP verbs. This can be configured in API Gateway and checked for in the Lambda code. Check the source code of the handlers and the template.yml file from the project you've just cloned with SAM.
// src/handlers/get-by-id.js
if (event.httpMethod !== 'GET') {
  throw new Error(`getMethod only accepts GET method, you tried: ${event.httpMethod}`);
}
# template.yml
Events:
  Api:
    Type: Api
    Properties:
      Path: /{id}
      Method: GET
The naming is totally up to the developer. You can follow the same approach that you're following with your regular server project.
You can define the handler with the name getAllUsers or users and then set the path of that resource to GET /users in AWS API Gateway. You can choose whichever HTTP verbs you desire. Check this tutorial out for a better understanding.
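As an illustration, a minimal SAM template snippet could wire a getAllUsers handler to GET /users. This is only a sketch; the function name, handler file, and path here are made up for this example and are not taken from the cloned project:

  getAllUsersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get-all-users.getAllUsers   # hypothetical file/export
      Runtime: nodejs14.x
      Events:
        Api:
          Type: Api
          Properties:
            Path: /users
            Method: GET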
Again, this is up to you. You can create a single Lambda that handles all that logic, or create individual Lambdas that are triggered one after another by the client based on the response from the previous API. I would say create a single Lambda and return the cumulative response, to reduce the number of requests; a decision like ending a trial belongs on the server anyway. But this also depends on the UI integration: if your screens demand separate API calls, then please, by all means, create individual Lambdas.
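A rough sketch of the single-Lambda variant, assuming lookupUser and endTrialForUser (the hypothetical helpers named in your question) are implemented elsewhere, e.g. against DynamoDB:

// src/handlers/get-user.js - a sketch, not production code
const { lookupUser, endTrialForUser } = require('../lib/users'); // hypothetical module

const TRIAL_DAYS = 7;

exports.getUser = async (event) => {
  const user = await lookupUser(event.pathParameters.id);
  if (!user) {
    return { statusCode: 404, body: JSON.stringify({ message: 'User not found' }) };
  }

  // The trial decision stays on the server, so the client can never skip it.
  const daysOnTrial = (Date.now() - new Date(user.trialStartedAt).getTime()) / 86400000;
  if (user.onTrial && daysOnTrial > TRIAL_DAYS) {
    await endTrialForUser(user.id);
    user.onTrial = false;
  }

  return { statusCode: 200, body: JSON.stringify(user) };
};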
This is not true. We can have dynamic routes specified in the API Gateway.
You can easily set path parameters in your routes by using {variableName} when defining the routes in API Gateway.
GET /users/{userId}
The userId will then be available at your disposal in the lambda function via event.pathParameters.
GET /users/{userId}?a=x
Similarly, you could even pass query strings and access them via event.queryStringParameters in code. Have a look at working with routes.
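Putting both together, a handler for GET /users/{userId}?a=x might read them like this (a minimal sketch using the standard Lambda proxy event shape):

exports.handler = async (event) => {
  const { userId } = event.pathParameters;          // "42" for /users/42
  const a = (event.queryStringParameters || {}).a;  // "x" for ?a=x, undefined otherwise

  return {
    statusCode: 200,
    body: JSON.stringify({ userId, a }),
  };
};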
Tutorial I would recommend for you:
Tutorial: Build a CRUD API with Lambda and DynamoDB

Related

Have a fallback for ImportValue on a CloudFormation template?

I've got two projects (backoffice and frontoffice) deployed using CloudFormation.
In the frontoffice, I import some DynamoDB table names from the backoffice stack as Environment variables for my Lambdas.
To run some acceptance tests I sometimes need to deploy the frontoffice without deploying the backoffice. In that case, the frontoffice will try to do an ImportValue of an Export that doesn't exist.
Is there any pattern that would allow me to get the frontoffice deployed anyway, and then handle the missing value in my code?
You could pass an additional Parameter to frontoffice indicating whether you are going to deploy with backoffice or not.
Based on the value of the parameter, you could use Conditions and Fn::If to either import the DynamoDB table names or not (see the sketch below).
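A minimal sketch of that idea; the parameter, condition, export, and sentinel names are all made up for this example:

Parameters:
  DeployWithBackoffice:
    Type: String
    AllowedValues: ["true", "false"]
    Default: "true"

Conditions:
  HasBackoffice: !Equals [!Ref DeployWithBackoffice, "true"]

Resources:
  FrontofficeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/app.handler
      Runtime: nodejs14.x
      Environment:
        Variables:
          # Sentinel value lets the Lambda code detect the missing backoffice
          TABLE_NAME: !If [HasBackoffice, !ImportValue backoffice-users-table, "TABLE_NOT_AVAILABLE"]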
For a fully automated solution without any extra Parameter, you would have to use a custom resource. The resource would be a Lambda function which uses the AWS SDK to query CloudFormation stacks and check for the backoffice.
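A sketch of such a custom resource handler, assuming the function is defined inline in the template so Lambda's built-in cfn-response helper is available (the export and sentinel names are made up):

const AWS = require('aws-sdk');
const response = require('cfn-response');

exports.handler = (event, context) => {
  if (event.RequestType === 'Delete') {
    return response.send(event, context, response.SUCCESS, {});
  }
  const cfn = new AWS.CloudFormation();
  cfn.listExports({}, (err, data) => {
    if (err) {
      return response.send(event, context, response.FAILED, {});
    }
    // Look for the backoffice export; fall back to a sentinel when absent.
    const exp = (data.Exports || []).find((e) => e.Name === 'backoffice-users-table');
    response.send(event, context, response.SUCCESS, {
      TableName: exp ? exp.Value : 'TABLE_NOT_AVAILABLE',
    });
  });
};

The rest of the stack can then read the result via !GetAtt on the custom resource, regardless of whether the backoffice is deployed.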

Service versioning in OpenAPI

We are about to implement a service API for my client that consists of a number of services, let's say ServiceA, ServiceB and ServiceC. Each service can (independently) introduce new versions over time, while the old versions still exist. So we might have:
ServiceA v1
ServiceA v2
ServiceB v1
ServiceC v1
ServiceC v2
ServiceC v3
We are required to document this API using OpenAPI. I'm not too familiar with OpenAPI, but as far as I can see you typically version the entire API, not separate services.
How would one typically document such versioning using OpenAPI? Personally I see two options, but I am very likely missing something:
Add each version of the same service as separate services in the documentation (but that causes a bloated API over time with a lot of services)
Increase all the services' versions and the entire API's version every time a single service changes version, so there's always a version 1, 2 and 3 of each service, even if some of them are identical (but that introduces a lot of unnecessary service versions).
Any input would be much appreciated.
Maybe a little late, but consider this point of view.
If the service API is implemented by a compact component, then the API should also be compact. The consumer of your service is not interested in your versioning policies beyond the extent of backwards-compatibility violations and/or new feature additions.
It is absolutely out of scope to require the consumer to know all your version numbers for individual services. They do not want to know them; they want a complete, compact package.
If your services have little in common, your best solution may be individual OpenAPI documents with tailored versions. The point is: you will not block development of your components through unrelated customers/consumers relying on the remaining services, and you will not bother the customers with zero-change version bumps outside their field of interest.
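For example, a sketch with two independent documents (titles and version numbers invented for illustration):

# service-a.yaml
openapi: 3.1.0
info:
  title: Service A
  version: "2.0"

# service-c.yaml
openapi: 3.1.0
info:
  title: Service C
  version: "3.0"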
On the other hand, you may as well keep a version number that is separate from the endpoints.
Something like this:
openapi: 3.1.0
info:
  version: 2.0
servers:
  - url: http://somewhere.com/v2
paths:
  /some-action:
    ...
instead of
openapi: 3.1.0
info:
  version: 2.0
servers:
  - url: http://somewhere.com
paths:
  /v2/some-action:
    ...
Now, your clients are expected to configure the address of your service, http://somewhere.com/v2, and use it for all services A, B, C. Once you decide to release a new major version /v3 which breaks compatibility on service A and brings new features to B, C, and possibly a new D:
- You may keep http://somewhere.com/v2 running with the old service component version for existing customers.
- The API may inform them in some way that they are using a deprecated endpoint.
- You may keep both API specs published in your dev documentation tool.
- Customers only using B, C may just swap http://somewhere.com/v2 for http://somewhere.com/v3 at the resource-URI level and not change/rebuild any code.
- Customers using A may keep using the old version and plan a slow shift towards /v3 by touching ONLY the code working with A.
- New customers may start off with the current LIVE version, having a compact API. You definitely do not want to force them to learn which version is the latest for each service separately; they want the versioning encapsulated inside the server as well. Also, they are free to use D.

How to write e2e test automation for an application containing Kafka, Postgres, and REST API Docker containers

I have an app which is set up by docker-compose. The app contains Docker containers for Kafka, Postgres, and the REST API endpoints.
One test case is to post data to the endpoints. In the data there is a field called callback URL; the app will parse the data and send it to the callback URL.
I am curious whether there is any test framework for similar test cases, and how to verify that the callback URL is hit with the right data?
Docker Compose support has been added to endly. In the pipeline workflow for the app (app.yaml), you can add a "deploy" task and start the Docker services by invoking docker-compose up.
Once the test task is completed and your callback URL is invoked, your validation task can check whether it was invoked with the expected data. For this you can utilize endly's recording feature and replay it to validate the callback request.
Below is an example of an ETL application's app.yaml using docker-compose with endly to start the Docker services. Hope it helps.
tasks: $tasks
defaults:
  app: $app
  version: $version
  sdk: $sdk
  useRegistry: false
pipeline:
  build:
    name: Build GBQ ETL
    description: Using an endly shared workflow to build
    workflow: app/docker/build
    origin:
      URL: ./../../
      credentials: localhost
    buildPath: /tmp/go/src/etl/app
    secrets:
      github: git
    commands:
      - apt-get -y install git
      - export GOPATH=/tmp/go
      - export GIT_TERMINAL_PROMPT=1
      - cd $buildPath
      - go get -u .
      - $output:/Username/? ${github.username}
      - $output:/Password/? ${github.password}
      - export CGO_ENABLED=0
      - go build -o $app
      - chmod +x $app
    download:
      /$buildPath/${app}: $releasePath
      /$buildPath/startup.sh: $releasePath
      /$buildPath/docker-entrypoint.sh: $releasePath
      /$buildPath/VERSION: $releasePath
      /$buildPath/docker-compose.yaml: $releasePath
  deploy:
    start:
      action: docker:composeUp
      target: $target
      source:
        URL: ${releasePath}docker-compose.yaml
In your question, where is Kafka involved? Both steps sound like HTTP calls:
1) Post data to an endpoint
2) Send data to the callback URL
One test case is to post data to endpoints. In the data, there is a field called callback URL. the app will parse the data and send the data to the callback URL.
Assuming the callback URL is an HTTP endpoint (e.g. REST or SOAP) with a POST/PUT API, it's better to also expose a GET endpoint on the same resource. In that case, when the callback POST/PUT is invoked, the server-side state/data changes; next, use the GET API to verify the data is correct. The output of the GET API is the Kafka data which was sent to the callback URL (this assumes your first POST message was to a Kafka topic).
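As a rough sketch of that idea in plain Node.js 18+ (the endpoint paths, payload fields, and expected status codes are made up; any HTTP client and test runner would do):

// post-then-get.test.js - minimal sketch using Node's built-in fetch and test runner
const assert = require('node:assert');
const test = require('node:test');

test('callback delivers the posted data', async () => {
  // 1) Post data that includes the callback URL the app should hit.
  const payload = { id: 'abc-1', callbackUrl: 'http://localhost:8080/api/results' };
  const post = await fetch('http://localhost:8080/api/ingest', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(payload),
  });
  assert.strictEqual(post.status, 202); // expected status is app-specific

  // 2) Verify via GET on the same resource that the callback was processed.
  const get = await fetch('http://localhost:8080/api/results/abc-1');
  assert.strictEqual(get.status, 200);
  const body = await get.json();
  assert.strictEqual(body.id, 'abc-1');
});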
You can achieve this the traditional JUnit way with a bit of code, or in a declarative way where you completely bypass coding.
The example has dockerized Kafka containers that you bring up locally to run the tests.
The section Kafka with REST APIs explains an automated way of combining REST API testing with Kafka data streams.
e.g.
---
scenarioName: Kafka and REST api validation example
steps:
  - name: produce_to_kafka
    url: kafka-topic:people-address
    operation: PRODUCE
    request:
      recordType: JSON
      records:
        - key: id-lon-123
          value:
            id: id-lon-123
            postCode: UK-BA9
    verify:
      status: Ok
      recordMetadata: "$NOT.NULL"
  - name: verify_updated_address
    url: "/api/v1/addresses/${$.produce_to_kafka.request.records[0].value.id}"
    operation: GET
    request:
      headers:
        X-GOVT-API-KEY: top-key-only-known-to-secu-cleared
    verify:
      status: 200
      value:
        id: "${$.produce_to_kafka.request.records[0].value.id}"
        postCode: "${$.produce_to_kafka.request.records[0].value.postCode}"
Idaithalam is a low-code test automation framework, developed using Java and Cucumber. It leverages Behavior Driven Development (BDD). Testers can create test cases/scripts in simple Excel with an API spec. Excel is a simplified way to create JSON-based test scripts in Idaithalam. Test cases can be created quickly and tested in minutes.
As a tester, you need to create the Excel file and pass it to the Idaithalam framework.
First, it generates the JSON-based test scripts (a Virtualan collection) from the Excel file. During test execution, this test script collection can be utilized directly.
Then it generates feature files from the Virtualan collection and executes them.
Lastly, it generates a test report in BDD/Cucumber style.
This provides complete testing support for REST APIs, GraphQL, RDBMS databases, and Kafka event messages.
Refer to the following link for more information on setup and execution:
https://tutorials.virtualan.io/#/Excel
How to create test scripts using Excel

Robot Framework - workflow design

At the moment I'm using Taskflow to specify my test workflow. I'm trying to understand whether Robot Framework can be used for my test scenario.
For example, my typical test is:
- Start traffic on device1
- While traffic is flowing:
- Collect via SSH realtime traffic data on device2
- Collect via SSH realtime traffic data on device3
- Stop traffic on device1
- Get output data from device2 and device3
- Check outputs
I did not find any workflow details for Robot Framework. Is it possible to design such a test in RF?
Riccardo
I believe it can be used for your scenario.
Robot Framework uses external libraries such as SSHLibrary.
Here is the documentation for said library, with a description of its concepts, the keywords you can use, and examples.
A lot of things are generally possible with Robot Framework, as you can always expand its capabilities by writing your own external libraries if the commonly used ones do not match your needs.
But it seems that this library might do exactly what you need (see the sketch after this list):
- You can open several connections
- You can start/execute commands
- You can log to a file or read output
- ...
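A minimal sketch of such a suite using SSHLibrary keywords; the host names, credentials, traffic scripts, and the "packets" check are all placeholders for your environment:

*** Settings ***
Library    SSHLibrary

*** Test Cases ***
Collect Traffic Data While Traffic Flows
    # Open one connection per device; aliases let us switch between them.
    Open Connection    device1.example.com    alias=device1
    Login              user    password
    Open Connection    device2.example.com    alias=device2
    Login              user    password
    Open Connection    device3.example.com    alias=device3
    Login              user    password

    # Start traffic on device1 without blocking.
    Switch Connection    device1
    Start Command        start-traffic.sh

    # Kick off the realtime collectors on device2 and device3.
    Switch Connection    device2
    Start Command        collect-traffic.sh
    Switch Connection    device3
    Start Command        collect-traffic.sh

    # Stop traffic, then read back what each collector produced.
    Switch Connection    device1
    Execute Command      stop-traffic.sh
    Switch Connection    device2
    ${out2}=    Read Command Output
    Switch Connection    device3
    ${out3}=    Read Command Output

    # Check outputs (placeholder assertion).
    Should Contain    ${out2}    packets
    Should Contain    ${out3}    packets
    [Teardown]    Close All Connections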

Retrieving Project Metrics using Rest API and Timemachine from SonarQube

I'm trying to retrieve project metrics using the REST API. I first query the projects using "/api/projects/index", then retrieve the metrics using "/api/metrics/search". Both work fine, and I end up with:
[id:35476, k:com.test:TestProject, nm:TestProject, qu:TRK, sc:PRJ]
[custom:false, description:Cyclomatic complexity, direction:-1, domain:Complexity, hidden:false, id:10019, key:complexity, name:Complexity, qualitative:false, type:INT]
Now I wanted to retrieve a project's metrics. Therefore I use the following URL:
https://MYHOST/sonarqube/api/timemachine/index?resource=35476&metric=10019&fromDateTime=2010-12-25T23:59:59+0100&toDateTime=2018-12-25T23:59:59+0100
The server returns only: [{"cols":[],"cells":[]}]
This surprises me, because when I open the web interface of Sonar for the project, I can see numbers. I tried some other metrics, but all ended with the same result. What am I doing wrong?
You didn't mention server version, so I'll assume the latest: 5.2.
I got the same result for a bare query (http://nemo.sonarqube.org/api/timemachine/index), and for a query that specified a resource but no metrics (http://nemo.sonarqube.org/api/timemachine/index?resource=org.sonarsource.sonarqube%3Asonarqube).
So I'm guessing there's a problem with either your resource or metric id. Try using the keys (com.test:TestProject, URL-encoded as com.test%3ATestProject, and complexity) instead.
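For example, the same query with the keys substituted for the numeric ids (everything else left exactly as in your URL):
https://MYHOST/sonarqube/api/timemachine/index?resource=com.test%3ATestProject&metric=complexity&fromDateTime=2010-12-25T23:59:59+0100&toDateTime=2018-12-25T23:59:59+0100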
And yes, the ids you got back from the other web services should work here, but what's meant by "id" can be a little... ah... variable from service to service.