I have an API and a consumer web app, both written in Node and Express. The API is defined by an OpenAPI Specification, with documentation served by swagger-ui-express.
The above web apps are Dockerised and managed in Kubernetes.
The API has a handful of endpoints for managing the lifecycle of a user's registration/application to the service.
Currently, when I need to clear down completed/abandoned applications, or resubmit failed applications, I rely on a periodically run cronjob that carries out a database query for each of those actions. The cronjob is defined by a Kubernetes config YAML file. This is quickly becoming unmanageable and hard to maintain.
I am looking into having a dedicated endpoint for each of the above tasks. A dedicated cronjob could then periodically send a request to the API endpoint to carry out the complex task. This moves the business logic back into the API and avoids duplication within a cronjob hosted elsewhere. I am ultimately asking: is this a good approach, or is there a better workflow documented somewhere that I could implement?
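To make the idea concrete, here is a rough sketch of what I picture the housekeeping endpoints looking like; the Application model, status values, and resubmit() helper below are placeholders for my real code (Mongoose-style queries assumed):

```js
// housekeeping.js - a hypothetical admin router; the Application model,
// status values, and resubmit() helper are placeholders, not my real schema.
const express = require('express');
const { Application, resubmit } = require('./models'); // placeholder module

const router = express.Router();

// POST /housekeeping/cleardown
// Removes completed/abandoned applications; the cronjob becomes a single
// HTTP call against this endpoint instead of duplicating the query.
router.post('/cleardown', async (req, res, next) => {
  try {
    const result = await Application.deleteMany({
      status: { $in: ['completed', 'abandoned'] },
    });
    res.json({ removed: result.deletedCount });
  } catch (err) {
    next(err);
  }
});

// POST /housekeeping/resubmit-failed
router.post('/resubmit-failed', async (req, res, next) => {
  try {
    const failed = await Application.find({ status: 'failed' });
    await Promise.all(failed.map((app) => resubmit(app)));
    res.json({ resubmitted: failed.length });
  } catch (err) {
    next(err);
  }
});

module.exports = router;
```

The cronjob container would then only need to run something like curl -X POST against the endpoint on its schedule.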
My thinking is that I could add these new endpoints to the already-existing consumer API, but have the new (housekeeping/management) endpoints separated from the others.
To separate each (current) endpoint into its respective resource, I am defining tags within the specification. Tags don't seem to be sufficient for separating these new "housekeeping" endpoints.
Looking through the SwaggerUI documentation, I can see that I can define multiple definitions (via the urls property) to switch between, each powered by an individual Specification document. This looks like a very clean way of separating the consumer API from the admin API. Is this best practice?
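For reference, this is the shape of the setup I'm looking at with swagger-ui-express (the spec paths and names are mine, illustrative only):

```js
const swaggerUi = require('swagger-ui-express');

const options = {
  explorer: true, // renders the definition-selector dropdown
  swaggerOptions: {
    urls: [
      // Each entry points at a spec document served by the app;
      // the paths and names here are illustrative.
      { url: '/api-docs/consumer.json', name: 'Consumer API' },
      { url: '/api-docs/admin.json', name: 'Admin API' },
    ],
  },
};

// Passing null as the first argument tells swagger-ui-express to load
// the definitions from swaggerOptions.urls instead of a single document.
app.use('/docs', swaggerUi.serve, swaggerUi.setup(null, options));
```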
Any input would be appreciated on this as I am struggling to find much documentation on this kind of issue.
Related
What is the best way to trigger an argo workflow from an API request?
The API request is handled by a web server. How does the server submit the workflow to the Argo server? Using the CLI? Using a REST request? What is the best/recommended approach here?
There's no one "right way." But here are some of the options, so you can pick the one that makes the most sense for your application:
Use the Argo API
with an SDK (Java, Go, Python)
If your API is written in Java, Go, or Python, and if your interactions with Argo are more complex than simply submitting a Workflow (for example, if you're also listing Workflows and would like a nice representation of those objects), an Argo Workflows SDK might be a good choice. In my experience the SDKs have quirks and bugs, so I'd only dive in if you need a more full-featured client.
directly with some HTTP client
If your use case is very simple (like submitting a small Workflow with a WorkflowTemplate reference), I would recommend using a direct HTTP call to the Argo or Kubernetes API.
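As a rough sketch of that direct-HTTP route against the Argo Server's workflow submit endpoint (Node 18+ for the global fetch; the host, namespace, template name, and token variable are all assumptions):

```js
// Hypothetical sketch: submit a Workflow from a WorkflowTemplate via the
// Argo Server's REST API. Adjust host, namespace, and names for your cluster.
async function submitWorkflow() {
  const res = await fetch(
    'https://argo-server.example.com:2746/api/v1/workflows/my-namespace/submit',
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.ARGO_TOKEN}`,
      },
      body: JSON.stringify({
        resourceKind: 'WorkflowTemplate',
        resourceName: 'my-template',
        submitOptions: { parameters: ['message=hello'] },
      }),
    }
  );
  const workflow = await res.json();
  console.log('submitted', workflow.metadata.name);
}

submitWorkflow().catch(console.error);
```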
Use a webhook
The webhook endpoint is technically part of the API, but it's a bit different. The API is basically a specialized version of the Kubernetes API, tailored to the Argo CRDs. The events API endpoint provides some additional features specific to kicking off workflows.
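For illustration, a call to the events endpoint might look like this (the host, namespace, and discriminator are placeholders; a WorkflowEventBinding in the cluster decides what Workflow, if any, gets created from the payload):

```js
// Hypothetical sketch of the Argo events endpoint; all names are placeholders.
async function sendEvent() {
  await fetch(
    'https://argo-server.example.com:2746/api/v1/events/my-namespace/my-discriminator',
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.ARGO_TOKEN}`,
      },
      body: JSON.stringify({ message: 'hello' }), // free-form JSON payload
    }
  );
}

sendEvent().catch(console.error);
```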
Use the CLI
You'd have to fork the CLI process from your server code, so this probably isn't the "cleanest" approach.
Use Argo Events
Argo Events is a separate but closely-related project. It can accept a variety of inputs (webhooks, pub/sub messages, etc.) and then trigger a Workflow.
Argo Events could make sense if, for example, you want an external record of all the workflows submitted. Pub/sub would give you that record.
Use the Kubernetes API or CLI
Workflows are just Kubernetes resources, so you can just submit them via Kubernetes mechanisms if you like. If your language has a robust Kubernetes SDK, that's a solid choice.
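For example, with the @kubernetes/client-node package (this sketch uses the positional-argument API of the 0.x releases; the namespace and template name are placeholders):

```js
const k8s = require('@kubernetes/client-node');

async function createWorkflow() {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // use loadFromCluster() when running inside a pod

  const client = kc.makeApiClient(k8s.CustomObjectsApi);

  // A Workflow is just a custom resource in the argoproj.io group.
  await client.createNamespacedCustomObject(
    'argoproj.io',  // group
    'v1alpha1',     // version
    'my-namespace', // namespace (assumption)
    'workflows',    // plural
    {
      apiVersion: 'argoproj.io/v1alpha1',
      kind: 'Workflow',
      metadata: { generateName: 'from-api-' },
      spec: { workflowTemplateRef: { name: 'my-template' } },
    }
  );
}

createWorkflow().catch(console.error);
```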
As I'm sure you can tell, it really depends on the application. Let me know if any of these needs clarification.
If I create a RESTful API named CustomerManagement with several operations in it like Create, Update, Retrieve customer etc.
Each operation is considered a business functionality. The backend is a monolith (it provides different SOAP interfaces (WSDLs) for the above operations).
So, as per microservices design principles, should we be creating an independently deployable image (independently versioned as well) for each operation, or can the whole REST API be bundled into a single image?
If I create a RESTful API named CustomerManagement with several operations in it like Create, Update, Retrieve customer etc
As per REST principles, APIs are designed around resources. A resource represents a domain entity. In your case, the customer is the domain entity. Hence, your REST API should be called 'customers'. The API will look something like /api/v1/customers. You can implement HTTP operations on this API as needed.
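To illustrate in Express terms, all of the operations live on one resource in one app (the handlers below are stubs, not a real implementation):

```js
const express = require('express');

const app = express();
app.use(express.json());

const router = express.Router();

// One 'customers' resource, several HTTP operations:
// not one deployable service per operation.
router.get('/', (req, res) => res.json([]));                        // list
router.post('/', (req, res) => res.status(201).json(req.body));     // create
router.get('/:id', (req, res) => res.json({ id: req.params.id }));  // retrieve
router.put('/:id', (req, res) =>
  res.json({ id: req.params.id, ...req.body })                      // update
);

app.use('/api/v1/customers', router);
app.listen(3000);
```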
Now to answer your question: all operations belonging to an API should be part of the same application. Splitting it up into different apps or multiple deployments does not make sense: not only will it lead to code duplication, but managing each deployment will also be an overhead.
I suggest you read about REST principles. This article is good for beginners - https://developer.ibm.com/technologies/web-development/articles/ws-restful
I have an application API that is used in two scenarios:
My frontend application uses it to interact with the server
A client is using it to develop a CLI tool, so there is open documentation of the API.
At the start, all of the endpoints were fairly generic, so they were used in both scenarios, but as my application grows I need to:
create special endpoints for my frontend application for optimization, for example an endpoint for a statistics screen
change some of the basic API result structures in ways that are not backward compatible and can break the clients' usage.
What is the best practice to design an API to meet these needs?
How should it be designed so that it can be adjusted to the frontend's needs while remaining robust enough not to break the clients' applications?
Should there be frontend-specific endpoints along with general ones?
What is the best practice to design an API to meet these needs?
This highly depends on your scenario. Is your API going to be used internally only or will it be made publicly available to an unknown number of developers and integrators? What is the expected lifetime of the API? Will it evolve?
How should it be designed so that it can be adjusted to the frontend's needs while remaining robust enough not to break the clients' applications?
I recommend committing to API contracts and using a specification for those contracts. I prefer the OpenAPI Specification, as it comes with a lot of benefits. Make sure you invest a lot of time and team effort (product owner, project managers, backend & frontend devs) to develop the contract over several iterations. After each iteration, test the specification by mocking the API and clients before moving on to implement your frontend app or CLI client.
Should there be frontend-specific endpoints along with general ones?
I would not do that, but I do not know your context. What does a frontend-specific endpoint mean? If it means that, as of today, the endpoint should only be used by the frontend application but is of no use to the current CLI client, then I think it is just a matter of perspective. Make it a general endpoint and simply use it from the frontend app. If it provides sensitive information that should be accessed only by the frontend, you need to think about authentication and authorization. I recommend implementing OAuth2 for that.
create special endpoints for my frontend application for optimization, for example an endpoint for a statistics screen
Should there be frontend-specific endpoints along with general ones?
I would suggest implementing all endpoints in your API and using OAuth2 for authentication. Use the scopes of the OAuth approach to manage authorization and access to the different endpoints for each client (frontend app, CLI).
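A rough sketch of what that gating could look like in Express (the scope names, the req.auth shape, and the handlers are assumptions that depend on your OAuth2 middleware):

```js
// Assumes an upstream OAuth2 layer has already validated the token and
// attached its granted scopes to req.auth.scopes; that shape is an assumption.
function requireScope(scope) {
  return (req, res, next) => {
    const scopes = (req.auth && req.auth.scopes) || [];
    if (!scopes.includes(scope)) {
      return res.status(403).json({ error: 'insufficient_scope' });
    }
    next();
  };
}

// A frontend-only endpoint and a general one, gated by different scopes.
app.get('/api/v1/statistics', requireScope('stats:read'), statsHandler);
app.get('/api/v1/items', requireScope('items:read'), listItemsHandler);
```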
You wrote you need to:
change some of the basic API result structures in ways that are not backward compatible and can break the clients' usage.
Try to avoid making breaking changes to your API. If it is used internally only, you may be in control of the different clients accessing the API, but even then the risk of breaking a client is high.
If you need to change existing behaviour, you should think about API versioning or API evolution, which is a controversially discussed topic with a lot of different opinions and practices.
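For instance, one common versioning shape is to mount a router per major version, so existing clients keep their contract while the new version carries the breaking change (the response shapes here are made up):

```js
const express = require('express');
const app = express();

// v1 keeps the old response shape for existing clients.
const v1 = express.Router();
v1.get('/things', (req, res) => res.json({ things: [] }));

// v2 introduces the breaking change behind a new prefix.
const v2 = express.Router();
v2.get('/things', (req, res) => res.json({ data: [], meta: {} }));

app.use('/api/v1', v1);
app.use('/api/v2', v2);
app.listen(3000);
```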
What is the best practice to design an API to meet these needs?
Design your resource representations so that they are forward and backwards compatible by design. Fundamentally, they are messages, so treat them that way; new optional fields with reasonable defaults can be added to the messages, but the semantics of a message element should never change.
If you dig through the old XML literature, you'll find references to ideas like Must Ignore and Must Forward -- those are the sorts of principles that also apply to the representations of long-lived resources.
Create new resources when the existing resources cannot be conveniently extended to cover your new use case.
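As a small client-side sketch of the Must Ignore idea (field names are made up):

```js
// Pick out only the fields this client understands, apply defaults for
// missing optional fields, and silently ignore anything unknown.
function parseThing(json) {
  return {
    id: json.id,
    name: json.name,
    // A newer optional field with a reasonable default; older servers
    // that never send it still yield a valid message.
    priority: json.priority !== undefined ? json.priority : 'normal',
  };
}

// Unknown fields like 'color' are ignored rather than treated as errors.
console.log(parseThing({ id: 1, name: 'widget', color: 'red' }));
```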
At work we're discussing how to structure our upcoming APIs. As of now, we're about to launch an API containing different user information endpoints, and thought we'd publish it under a URI like this: api.mycompany.com/userinfo. Examples of endpoints:
api.mycompany.com/userinfo/users
api.mycompany.com/userinfo/users/{id}
api.mycompany.com/userinfo/api-docs <-- Swagger document for this particular API will be located here
This type of setup would allow us to have server1.mycompany.com host the API, and use our load balancer / proxy to forward traffic for api.mycompany.com/userinfo to server1.mycompany.com. For our next API running on server2.mycompany.com, we'll simply have our load balancer / proxy forward traffic from api.mycompany.com/transportation to server2.mycompany.com, like this (a rough proxy sketch follows the endpoint list below):
api.mycompany.com/transportation/cars
api.mycompany.com/transportation/cars/{id}
api.mycompany.com/transportation/api-docs <-- Swagger document for this particular API will be located here
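To illustrate, the gateway could look roughly like this with the http-proxy-middleware package (hostnames as above; depending on the middleware version you may need pathRewrite to control whether the /userinfo prefix is forwarded):

```js
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Forward each URI namespace to the server that hosts that API.
app.use('/userinfo', createProxyMiddleware({
  target: 'http://server1.mycompany.com',
}));
app.use('/transportation', createProxyMiddleware({
  target: 'http://server2.mycompany.com',
}));

app.listen(80); // fronted by api.mycompany.com
```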
By using "userinfo" and "transportation" in the URI, we'll have a simple way to reference our different APIs as a whole, and a simple way to publish the Swagger UI along side the actual API.
My concern with these URIs is that they're not hierarchical, but more like a way to group endpoints together. Nor is "userinfo" a resource, so compared to the REST API examples one typically comes across online, using elements such as "userinfo" and "transportation" in the path may not be according to best practices.
Does this design break any REST API design patterns? If so, how would you suggest us publishing our different APIs under a single fqdn (api.mycompany.com)? Or are there reasons not to use a single fqdn for all of our APIs?
Any input will be greatly appreciated.
REST doesn't care what spellings you use for your URI
My concern with these URIs is that they're not hierarchical, but more like a way to group endpoints together. Nor is "userinfo" a resource
Identifiers being "hierarchical" doesn't (necessarily) promise anything about a hierarchy of resources. The fact that there is a resource identified by /userinfo/users does not imply that there is also a resource identified by /userinfo. Think Key/Value store, not File System.
A Rails developer might recognize /userinfo and /transportation as namespaces.
If so, how would you suggest us publishing our different APIs under a single fqdn (api.mycompany.com)? Or are there reasons not to use a single fqdn for all of our APIs?
In a 2014 interview, Fielding offered this answer about versioning:
It is always possible for some unexpected reason to come along that requires a completely different API, especially when the semantics of the interface change or security issues require the abandonment of previously deployed software. My point was that there is no need to anticipate such world-breaking changes with a version ID. We have the hostname for that. What you are creating is not a new version of the API, but a new system with a new brand.
If you squint at that, it might imply that different API should be on different (logical) hosts.
Does this design break any REST API design patterns?
Nope. There are no "REST API design patterns". And REST doesn't say anything about what URLs should look like. REST says to treat them as opaque. There's an argument that web API URLs should be "hackable", that is, easily understandable and modifiable by a human. I'd argue that your URL structure is hackable. I'm not aware of any persuasive argument that URLs must be hierarchical in nature.
In an HTTP-driven microservices architecture, each service might have a number of public endpoints that return JSON, for example, to a client or an API gateway intermediary. These services could also accept POSTs with JSON bodies of a certain shape, or query strings of a certain shape, etc.
What are some good options for documenting or programmatically keeping track of these "contracts" between services? That is, if service A's /getThing endpoint has been refactored to return different data, is there a documentation tool or methodology that would facilitate updating the API gateway to adapt to this change?
For programmatic management of contracts, if you are using the spring-cloud stack then you should look into spring-cloud-contract, which lets you keep track of the latest version of the contracts for your REST endpoints. If any change occurs in an API endpoint, it will notify you by breaking the contract and failing the test cases built around it.
Let's say, for example, service A's /getThing endpoint has been refactored to return different data; then all services calling this endpoint will fail at build time.
However, this methodology won't facilitate updating the API gateway to adapt to this change, as there might be different logic you want to perform for every new version of your endpoints.
You can also create REST Docs snippets using these endpoint contracts: check out Rest Docs snippets. You can also use Swagger for documenting your endpoints.
For Node.js, check here.