Service versioning in OpenAPI

We are about to implement a service API for my client that consists of a number of services, let's say ServiceA, ServiceB and ServiceC. Each service can over time (independently) introduce new versions, while the old versions still exist. So we might have:
ServiceA v1
ServiceA v2
ServiceB v1
ServiceC v1
ServiceC v2
ServiceC v3
We are required to document this API using OpenAPI. I'm not too familiar with OpenAPI, but as far as I can see you typically version the entire API, not separate services.
How would one typically document such versioning using OpenAPI? Personally I see two options, but I am very likely missing something:
Add each version of the same service as separate services in the documentation (but that causes a bloated API over time with a lot of services)
Increase all the services' versions and the entire API's version every time a single service changes version, so there's always a version 1, 2 and 3 of each service, even if some of them are identical (but that introduces a lot of unnecessary service versions).
Any input would be much appreciated.

Maybe a little late, but consider this point of view.
If the service API is implemented by a compact component, then the API should also be compact. The consumer of your service is not interested in your versioning policies beyond the extent of backwards-compatibility violations and/or new feature additions.
It is out of scope to require the consumer to know the version numbers of all your individual services. They do not want to know them; they want a complete, compact package.
If your services have little in common, your best solution may be individual OpenAPI documents with tailored versions. The point is that you will not have the development of one component blocked by unrelated customers/consumers who rely on the remaining services, and you will not bother customers with zero-change version bumps in their field of interest.
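For instance, a minimal sketch of that option (the file names, titles and paths below are invented for illustration) would be two independent documents, each carrying only its own service's version:
# serviceA.yaml - documents only ServiceA, currently at version 2
openapi: 3.1.0
info:
  title: ServiceA
  version: "2.0"
paths:
  /service-a/some-action:
    ...
# serviceB.yaml - documents only ServiceB, still at version 1
openapi: 3.1.0
info:
  title: ServiceB
  version: "1.0"
paths:
  /service-b/other-action:
    ...
With separate documents, bumping ServiceA's version never forces a release of ServiceB's or ServiceC's documentation.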
On the other hand, you may as well keep a single version number that is separate from the endpoint paths.
Something like this:
openapi: 3.1.0
info:
  version: 2.0
servers:
  - url: http://somewhere.com/v2
paths:
  /some-action:
    ...
instead of
openapi: 3.1.0
info:
  version: 2.0
servers:
  - url: http://somewhere.com
paths:
  /v2/some-action:
    ...
Now, your clients are expected to configure the address of your service, http://somewhere.com/v2, and use it for all services A, B, C. Once you decide to release a new major version /v3 that breaks compatibility on service A and brings new features to B, C, and possibly a new D:
You may keep http://somewhere.com/v2 running with the old service component version for existing customers.
The API may inform them in some way that they are using a deprecated endpoint (see the sketch after this list).
You may keep both API specs published in your dev documentation tool.
Customers only using A and B may just swap http://somewhere.com/v2 for http://somewhere.com/v3 at the resource URI level and not change/rebuild any code.
Customers using A may keep using the old version and plan a slow shift towards /v3 by touching ONLY the code working with A.
New customers may start off with the current LIVE version and get a compact API. You definitely do not want to force them to learn which is the current latest version of each service separately; they want versioning encapsulated behind the server as well. Also, they are free to use D.
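One way to express the deprecation hint in the spec itself is OpenAPI's built-in deprecated flag on the affected operations of the old document. This is only a sketch; the path, summary and version number below are placeholders, and at runtime the service might additionally return something like a Deprecation or Sunset header.
openapi: 3.1.0
info:
  version: 2.1
servers:
  - url: http://somewhere.com/v2
paths:
  /service-a/some-action:
    get:
      summary: Old ServiceA behaviour, superseded in /v3
      deprecated: true
      responses:
        '200':
          description: OK
Most OpenAPI tooling renders deprecated operations with a visible warning, so /v2 consumers see the hint without having to track per-service version numbers.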

Differences in designing a serverless function vs. regular server

I am wondering what approach in designing serverless functions to take, while taking designing a regular server as a point of reference.
With a traditional server, one would focus on defining collections and then CRUD operations one can run on each of them (HTTP verbs such as GET or POST).
For example, you would have a collection of users, and you can get all records via app.get('/users', ...), get specific one via app.get('/users/{id}', ...) or create one via app.post('/users', ...).
How differently would you approach designing a serverless function? Specifically:
Is there a point in differentiating between HTTP operations, or would you just go with POST? I find it useful to have them defined on the client side, to decide whether I want to retry in case of an error (if the operation is idempotent, it will be safe to retry, etc.), but it does not seem to matter in the back-end.
Naming. I assume you would use something like getAllUsers(), whereas with a regular server you would define a collection of users and then just use GET to specify what you want to do with it.
Size of functions: suppose you need to do a number of things in the back-end in one step. Would you define a number of small functions, such as lookupUser() and endTrialForUser() (fired if the user we got from lookupUser() has been on trial longer than 7 days), and then run them one after another from the client (deciding on the client whether the trial should be ended, which seems quite unsafe), or would you just create a getUser() and handle all the logic there?
Routing. In serverless functions, we can't really do anything like .../users/${id}/accountData. How would you go about fetching nested data? Would you just return a complete JSON every time?
I have been looking for some comprehensive articles on the matter but no luck. Any suggestions?
This is a very broad question that you've asked. Let me try answering it point by point.
Firstly, the approach that you're talking about here is the Serverless API project approach. You can clone the sample project to get a better understanding of how you can build REST APIs for performing CRUD operations. Start by installing the SAM CLI and then run the following commands.
$ sam init
Which template source would you like to use?
1 - AWS Quick Start Templates
2 - Custom Template Location
Choice: 1
Cloning from https://github.com/aws/aws-sam-cli-app-templates
Choose an AWS Quick Start application template
1 - Hello World Example
2 - Multi-step workflow
3 - Serverless API
4 - Scheduled task
5 - Standalone function
6 - Data processing
7 - Infrastructure event management
8 - Machine Learning
Template: 3
Which runtime would you like to use?
1 - dotnetcore3.1
2 - nodejs14.x
3 - nodejs12.x
4 - python3.9
5 - python3.8
Runtime: 2
Based on your selections, the only Package type available is Zip.
We will proceed to selecting the Package type as Zip.
Based on your selections, the only dependency manager available is npm.
We will proceed copying the template using npm.
Project name [sam-app]: sample-app
-----------------------
Generating application:
-----------------------
Name: sample-app
Runtime: nodejs14.x
Architectures: x86_64
Dependency Manager: npm
Application Template: quick-start-web
Output Directory: .
Next steps can be found in the README file at ./sample-app/README.md
Commands you can use next
=========================
[*] Create pipeline: cd sample-app && sam pipeline init --bootstrap
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
Coming to your questions point-wise:
Yes, you should differentiate your HTTP operations with their suitable HTTP verbs. This can be configured at the API Gateway and can be checked for in the Lambda code. Check the source code of the handlers and the template.yml file from the project you've just cloned with SAM.
// src/handlers/get-by-id.js
if (event.httpMethod !== 'GET') {
    throw new Error(`getMethod only accepts GET method, you tried: ${event.httpMethod}`);
}

# template.yml
Events:
  Api:
    Type: Api
    Properties:
      Path: /{id}
      Method: GET
The naming is totally up to the developer. You can follow the same approach that you're following with your regular server project.
You can define the handler with the name getAllUsers or users and then set the path of that resource to GET /users in the AWS API Gateway. You can use whichever HTTP verbs you prefer. Check this tutorial out for a better understanding.
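For example, a minimal SAM sketch of that wiring (the function name, handler path and runtime below are placeholders, not part of the cloned project) would go under Resources in template.yml:
GetAllUsersFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: src/handlers/get-all-users.getAllUsersHandler
    Runtime: nodejs14.x
    Events:
      Api:
        Type: Api
        Properties:
          Path: /users
          Method: GET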
Again, this is up to you. You can create a single Lambda that handles all that logic, or create individual Lambdas that are triggered one after another by the client based on the response from the previous API. I would say, create a single Lambda and just return the cumulative response to reduce the number of requests. But again, this totally depends on the UI integration. If your screens demand separate API calls, then please, by all means, create individual Lambdas.
This is not true. We can have dynamic routes specified in the API Gateway.
You can easily set wildcards in your routes by using {variableName} while setting the routes in API Gateway.
GET /users/{userId}
The userId will then be available in the Lambda function via event.pathParameters.
GET /users/{userId}?a=x
Similarly, you could even pass query strings and access them via event.queryStringParameters in code. Have a look at working with routes.
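As a minimal sketch (the handler name and the response shape are invented here, and it assumes the same API Gateway proxy integration as the project above), reading both looks like this:
// Hypothetical handler for GET /users/{userId}?a=x
exports.getUserByIdHandler = async (event) => {
    const userId = event.pathParameters.userId;          // from the {userId} path segment
    const a = event.queryStringParameters                // null when no query string is sent
        ? event.queryStringParameters.a
        : undefined;
    // ... look the user up in your data store here ...
    return {
        statusCode: 200,
        body: JSON.stringify({ userId, a }),
    };
};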
Tutorial I would recommend for you:
Tutorial: Build a CRUD API with Lambda and DynamoDB

Is it possible to have a single frontend select between backends (defined dynamically)?

I am currently looking into deploying Traefik/Træfik on our Service Fabric cluster.
Basically I have a setup with any number of applications (services), each defined with a tenant name, and each of these services is in fact a separate Web UI.
I am trying to figure out if I can configure a single frontend to target a backend so I don't have to define a new frontend each time I deploy a new UI app. Something like
[frontend.tenantui]
rule = "HostRegexp:localhost,{tenantName:[a-z]+}.example.com"
backend = "fabric:/WebApp/{tenantName}"
The idea is to have it such that I can just deploy new UI services without updating the frontend configuration.
I am currently using the Service Fabric provider for my backend services, but I am open to using the file provider or something else if that is required.
Update:
The ServiceManifest contains labels, so as to let Traefik create backends and frontends.
The labels are defined for one service; let's call it WebUI as an example. Now when I deploy an instance of WebUI it gets a label and Traefik understands it.
Then I deploy ANOTHER instance with a DIFFERENT set of parameters; it's still the WebUI service and it uses the same manifest, so it gets the same labels and the same routing. But what I would really want is to let it have a label containing some sort of rule so I could route based on the name of the service instance (determined at runtime, not design time). Specifically, I would like the runtime part to be part of the domain name (thus the suggestion of a HostRegexp-style rule).
I don't think it is possible to use the matched group from the HostRegexp to determine the backend.
A possibility would be to use the Property Manager API to dynamically set the frontend rule for the service instance after creating it. Also, see this for a complete example of using the API.

How to configure localized URLs in the Kubernetes NGINX ingress controller API object

I have a cluster in Azure AKS with 1 node.
On that cluster I have two back-end services.
Each back-end service is a web app.
I have a domain mydomain.com.
Each app will need to be configured with its own path rule in the ingress object.
Web app 1's (let's call this one the homepage app) target URL needs to be one of the following:
US version of the site: mydomain.com
Swedish version of the site: mydomain.com/se/sv-sv/hem
Any other location/language version of the site: mydomain.com/xx/yy-xx/abcdefgh
Web app 2's (let's call this one the whitepony app) target URL needs to be one of the following:
US version of the site: mydomain.com/us/en-us/whitepony
Swedish version of the site: mydomain.com/se/sv-sv/whitepony
Any other location/language version of the site: mydomain.com/xx/yy-xx/whitepony
(The whitepony app's target path segment is called whitepony regardless of location/language.)
Now to my question.
How can I configure these rules in an ingress API object?
Can I use prefixes in the path rules?
Or do I need to use regular expressions?
And what about the special case of the US version of the homepage app, where I'm not using any prefixes/extra URL segments?
Can I use conditions in the ingress object?
Or how would you configure the ingress resource object to meet all the above requirements?
Note that I know and have successfully configured multiple back-end services using path rules in an ingress object.
But without prefixes or extra URL segments.
I won't give you a fully working example of how to specify rules in the ingress resource to meet your requirements; I would rather share some hints:
Yes, you will need regular expressions to achieve it, and here is the example of doing it directly with NGINX directives, based on the example of a WordPress multi-language site (a rough sketch of an annotation-based ingress follows after these hints).
You don't need to define these rewrite rules with annotations; you can use pure NGINX config style instead, by supplying an appropriate inline NGINX config file inside a ConfigMap. Here is the example of how to achieve this.
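As a rough, untested sketch of the regex approach (the service names, port and exact regex are placeholders to adapt, and it assumes the NGINX ingress controller's use-regex annotation with the networking.k8s.io/v1 API), it could look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mydomain-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      # any locale prefix followed by /whitepony, e.g. /us/en-us/whitepony or /se/sv-sv/whitepony
      - path: /[a-z]{2}/[a-z]{2}-[a-z]{2}/whitepony
        pathType: ImplementationSpecific
        backend:
          service:
            name: whitepony-svc
            port:
              number: 80
      # everything else, including the bare US homepage at / and the localized homepage paths
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: homepage-svc
            port:
              number: 80
The intent is that the more specific regex path wins over the catch-all /, so the whitepony app gets its localized URLs while the homepage app serves the rest.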
I hope this will help you

asynchronousRulesetParsing XU property and the Business Rules service on Bluemix

My ruleset is deployed on the Business Rules service on Bluemix, and I want to execute an older version of a ruleset while the newer one is being parsed. To do so, I am trying to configure the XU property asynchronousRulesetParsing, but I cannot figure out how to do so.
XU properties cannot be configured for the Business Rules service on Bluemix.
Specifically regarding the asynchronousRulesetParsing XU property, I found that it is not applicable when using Hosted Transparent Decision Services (HTDS) in ODM, since the implementation of HTDS always forces the latest version of rulesets to be used.
Since the Business Rules service on Bluemix uses HTDS, the asynchronousRulesetParsing XU property is also not applicable.
Instead, once I deploy a new ruleset version, I send a "dummy" request to force the parsing of the new ruleset version and absorb the parsing delay. I wait for this request to complete before running the "real" requests against the ruleset.
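A minimal warm-up sketch in Node (the endpoint URL and payload are placeholders for your own Business Rules service instance and ruleset; only the warm-up idea comes from the answer above) could look like this:
// warm-up.js - send one throwaway request so the new ruleset version gets parsed
const https = require('https');

// Full REST endpoint of the deployed ruleset, supplied by you.
const WARMUP_URL = process.env.WARMUP_URL;
// Any minimal valid input for the ruleset.
const payload = JSON.stringify({});

const req = https.request(
    WARMUP_URL,
    { method: 'POST', headers: { 'Content-Type': 'application/json' } },
    (res) => {
        res.on('data', () => {});                          // drain the response
        res.on('end', () => console.log('Warm-up done'));  // safe to send real traffic now
    }
);
req.on('error', (err) => console.error('Warm-up failed:', err));
req.write(payload);
req.end();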

Upgrading to Spring Cloud 1.0.1: Zuul URL-encoded parameters not working

We use Zuul as a gateway to dispatch incoming requests to services.
When we upgraded from 1.0.0, we noticed two issues, one of which we found a workaround for.
The second issue is that some of the incoming requests have encoded URIs to deal with special characters in the request, e.g. ....rovi//45846, which needs to be changed to rovi%2F%2F45846 in order to be passed in.
So for a REST URI like the following: POST http://localhost:8902/contentservice/content/subscriptionPackages/624460160/channels/rovi%252F%252F45846
If I make this request directly to the service, it works correctly.
But if I route it through zuul as POST http://localhost:8765/contentservice/content/subscriptionPackages/624460160/channels/rovi%252F%252F45846 then it disappears.
Now if I take the % out, it is passed in and treated as an error in the contentservice when I step through the content service front-end controller (of course).
What has changed between Spring Cloud 1.0.0 and 1.0.1 in the Zuul functionality to stop this from working? It was definitely working in 1.0.0.
So the Spring Cloud team has fixed this in the snapshot releases, and you can find more detail here:
https://github.com/spring-cloud/spring-cloud-netflix/issues/366#issuecomment-106363315