Public valid REST API with wolkenkit.io

I am currently evaluating the framework "wolkenkit" [1] for use in an application. Within this application I will have a user interface for tenant-based data management. Only authenticated users will have access to this application.
Additionally, there should be a public REST API that follows common standards and is callable by the public (tenant security handled by submitting a tenant-based API key in the request headers).
As far as I have found out, the wolkenkit REST API does not seem to fit these standards in terms of HTTP verbs.
But as wolkenkit overall appears to me to be a really flexible and easy-to-use framework, I wonder how to implement such a public API on top of it.
Would it be a valid approach, for example, to create my own web application that internally connects to the wolkenkit backend? And what about the additional performance overhead in that case?
[1] https://www.wolkenkit.io/

In addition to mattwagl's answer, I would like to point out a few things that may interest you.
First of all, since wolkenkit is based on CQRS, the application has separate APIs for writing and reading. This means that if you send a command (whose intent is to change state), it goes to the write API; if you subscribe to events or run a query, it goes to the read API.
This in turn means that when you send a command, it's up to the write side to respond to it. As the write side is not meant to return application state, all it says is basically: "Thanks, I have received the command." To get the actual result you have to wait for the appropriate event, which means subscribing to the read API.
In the wolkenkit documentation there is a nice diagram which shows this clearly.
If you now add a separate REST API (one that actually fulfills the requirements of REST), you need to handle waiting for the result internally. In other words: clients in wolkenkit are always meant to be asynchronous, while REST is not. Hence it's your job to bridge the asynchronous behavior of the wolkenkit APIs inside your REST API. I think this is the hardest part.
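To make this concrete, here is a minimal sketch of such a bridge in TypeScript with Express. The helpers sendCommand and waitForEvent are stand-ins I made up for whatever mechanism you use to talk to the write and read APIs; they are not part of wolkenkit, and the resource and event names are invented:

import express from "express";

// Stand-in helpers for talking to the wolkenkit write and read APIs.
// Their names, signatures, and behavior are assumptions made for this
// sketch; they are not part of the actual wolkenkit client.
async function sendCommand(name: string, payload: unknown): Promise<{ commandId: string }> {
  // In a real bridge: POST the command to the write API here.
  return { commandId: "42" };
}

async function waitForEvent(commandId: string, eventName: string, timeoutMs: number): Promise<unknown> {
  // In a real bridge: subscribe to the read API and resolve once the
  // event correlated with commandId arrives, or reject on timeout.
  return { commandId, eventName };
}

const app = express();
app.use(express.json());

// The REST client gets a synchronous answer: we issue the command and
// then hold this request open until the matching event shows up.
app.post("/invoices", async (req, res) => {
  try {
    const { commandId } = await sendCommand("issueInvoice", req.body);
    const event = await waitForEvent(commandId, "invoiceIssued", 5000);
    res.status(201).json(event);
  } catch {
    res.status(504).json({ error: "timed out waiting for the event" });
  }
});

app.listen(3000);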
Once you have done this, you will have a synchronous REST API, and of course it will have some overhead. But since this overhead is limited to passing through and translating network requests, it should be negligible.
Oh, and finally, there is another thing to watch out for: since REST as originally conceived relies on the HTTP verbs to transport semantics, you need to map GET / POST / PUT / DELETE to the semantic commands of wolkenkit. As long as this can be done 1:1, everything's fine; problems start when there are multiple commands that (technically speaking) do an UPDATE.
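As a rough illustration (the resource and command names here are invented), such a mapping might start out 1:1 and then break down:

// Illustrative only: mapping REST verbs on a resource to semantic
// wolkenkit commands. The command names are made up for this example.
const commandFor: Record<string, string> = {
  "POST /invoices": "issueInvoice",
  "DELETE /invoices/:id": "cancelInvoice",
  // Two semantically different updates compete for one verb: PUT alone
  // cannot distinguish them, so the mapping is no longer 1:1.
  "PUT /invoices/:id": "correctInvoiceAmount", // or "extendPaymentTerm"?
};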
PS: I'm also one of the developers of wolkenkit.
PPS: However you end up solving this, I would be highly interested to hear from you! It would be great if you could share your experiences with us, as you are most probably not the last one with this idea. If you want to contact us, the easiest way is via Slack.

wolkenkit applications can be accessed using an HTTP API and a WebSocket API. Both APIs are provided by the tailwind module that wolkenkit uses under the hood. In the tailwind repo you can find a very simple documentation of the available HTTP routes.
You're right, the wolkenkit HTTP API is not a classic REST API. It's more RPC-style, which in our experience is a good fit for applications. There are only 3 routes that your clients/tenants need to support:
/v1/command (POST) is used for issuing commands. The commands you post should follow the command schema.
/v1/events (POST) can be used for streaming events to clients. These events will follow the event schema.
/v1/read/:modelType/:modelName (POST) is used to read models.
You can simply use HTTPie to test these routes.
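For illustration, issuing a command from TypeScript could look roughly like the following. The route is the one listed above, but the body fields are only my assumption of what a command payload might contain; check the command schema in the tailwind repo for the real shape:

// Sketch: issuing a command against the wolkenkit HTTP API. The host
// and all body fields are assumptions made for this example.
const response = await fetch("https://example.com/v1/command", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    context: { name: "accounting" }, // assumed field
    aggregate: { name: "invoice" },  // assumed field
    name: "issue",                   // assumed field
    data: { amount: 100 },           // assumed field
  }),
});
console.log(response.status);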
Authentication for these APIs is currently done using OpenID Connect. There's a very detailed article on how to set up authentication using Auth0. I'm not quite sure whether this fits your use case, but you could basically use any authentication service that follows this standard or that is able to issue JWT tokens.
Finally, you could also build your own JavaScript client SDK that runs inside browsers, by writing a module that uses wolkenkit-client-js under the hood. This SDK can use the same API as any other client to connect to your application.
Hope this helps.
PS: Please note that I am one of the authors of wolkenkit.

Related

Difference between Swagger & HATEOAS

Can anyone explain the difference between Swagger and HATEOAS? I have searched many times, but nobody has been able to give a proper, detailed answer covering these two aspects.
The main difference between Swagger and HATEOAS, IMO, which is not covered in the accepted answer, is that Swagger is only needed for RPC'esque APIs. Such APIs, however, have hardly anything to do with REST.
There is a further, widespread misconception that anything exchanged via HTTP is automatically RESTful (i.e. in accordance with the REST architectural style), which it is not. REST just defines a set of constraints that are not choices or options but are mandatory, from start to finish. There is nothing wrong with not being RESTful, but it is wrong to call such an architecture REST.
Swagger describes the operations that can be performed on an endpoint and the payload (including headers and the expected representation formats) that needs to be sent to the service, and also describes what a client might expect as a response. This allows Swagger to be used both as documentation and as a testing framework for the API. Due to the tight coupling of Swagger to the API, it behaves much like a typical RPC service description, i.e. similar to WSDL files in SOAP or stub or skeleton classes in RMI or CORBA. If either the endpoint changes or something in the payload changes, clients implemented against a Swagger documentation will probably break over time, reintroducing the very problems typical RPC implementations have.
REST and HATEOAS, on the other hand, are designed for discovery and further development. REST isn't a protocol but an architectural style to start with, one that describes the interaction flow between a client and a server in a distributed system. It basically took the concepts which made the Web so successful and translated them onto the application layer. So the same concepts that apply to the browsable Web also apply to REST. It is therefore no surprise that HATEOAS (the usage of and support for links, link relations, and link names) behaves similarly to the Web.
When designing a REST architecture, it is beneficial to think of a state machine where the server provides all of the information a client needs to take further actions. Asbjørn Ulsberg gave a great talk back in 2016 in which he explains affordances and how a state machine might be implemented through HATEOAS. Besides common or standardized media types and relation names, no out-of-band knowledge is necessary to interact with the service further. In the toaster example Asbjørn gave in his talk, a toaster may have the states off, on, heating, and idle; turning the toaster on leads to a transition from off to on, followed by a transition to heating until a certain temperature is reached, at which point the state transitions to idle, and the toaster then alternates between idle and heating until it is turned off.
HATEOAS will provide the client with information about the current state and include links the client can invoke to transition to the next state, e.g. turning the toaster off again. It's important to stress here that the server provides the client with every action the client might perform next. There is no need for a client implementor to consult any proprietary API documentation in order for the client to be able to interact with a REST service. Further, URIs do not have to be meaningful or designed to convey a semantically expressive structure, as clients determine whether invoking a URI makes sense via its link relation name. Such relation names are either specified by IANA, by a common approach such as Dublin Core or schema.org, or are absolute URIs acting as extension attributes, which might point to a human-readable description that can in turn be surfaced to the user via mouse-over tooltips or the like.
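To make the toaster example concrete, a hypermedia response could look roughly like the following sketch; the URIs, relation names, and overall shape are invented for illustration:

// Illustrative hypermedia representation of the toaster resource.
// The client chooses a link by its relation name, never by parsing
// the URI itself.
const toasterRepresentation = {
  state: "off",
  links: [
    { rel: "self", href: "/toaster" },
    // "turn-on" stands in for a registered or extension relation name.
    { rel: "turn-on", href: "/toaster/on", method: "PUT" },
  ],
};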
I hope you can see for yourself that Swagger is only needed to describe RPC Web APIs rather than applications that follow the REST architectural design. Messages exchanged via REST APIs should include all the information a client needs to make informed choices about the next state transition. As such, it is beneficial to design such message flows and interactions as a state machine.
Update:
How are Swagger and HATEOAS mutually exclusive? The former documents your endpoints (making auto-generating code possible) and the latter adds meta-information to your endpoints which tell the consumer what they can do (i.e. which other endpoints are available). These are very different things.
I never stated that they are mutually exclusive, just that they serve two different purposes, and if you follow one approach the other becomes more or less useless. Using both together does not make much sense, though.
Let's move the discussion to the Web domain, as this is probably easier to understand, and REST is de facto just a generalization of the concepts used on the Web; taking this step is natural and also a good recommendation when designing REST architectures in general. Think of a case where you, as a user, want to send some data to a server. You have never used the service before, so you basically don't know what a request has to look like.
With Swagger, you would call up the endpoint documentation, select the option that most likely solves your task, read up on what the request needs to look like, and hack a test case into your application that ends up generating an HTTP request sent to the respective location. Auto-generating code might spare you some hacking time, though you still need to integrate the stub classes into your application and test the whole thing at least once, just to be safe. If you later need to integrate a second service of that API, or of yet another API, you start from the beginning: look up the Swagger documentation, generate or hack the interaction code, and integrate it into your domain. There are plenty of manual steps involved, and when the API changes you need to update your client, as otherwise it might stop working.
In the Web example, however, you just start your browser/Web client and invoke the respective URI that allows you to send the data to the server; the server will most likely send you an HTML form that you just need to fill out, and clicking the send button automatically sends the request to the server, which starts to process it. This is HATEOAS. You used the given controls to drive your workflow. The server taught your client every little detail it needed to make a valid request: it served your client the target URI to send the request to, the HTTP method to use, and, most often implicitly, the media type the payload should be in. In addition, it gave your client a skeleton of the expected and/or supported elements the payload should contain; i.e. the form may require you to fill out a couple of input fields, select among a given set of choices, or use some other control such as a date or time picker whose value is translated into a valid date or time representation for you. All you needed to do was invoke the respective resource in your Web client. No auto-generation, no integration into your browser/application. Using other services (from the same or different providers) will most likely just work the same way, so there is no need to change or update your HTTP client (browser) as long as the media types the requests and responses are exchanged in are supported.
In the case where you rely on Swagger's RPC'esque documentation, that documentation is the source of truth on how to interact with the service, and mixing in some HATEOAS information doesn't provide any benefit. In the Swagger case, carrying around additional meta-information that bloats the requests/responses for no obvious reason, when all the required information is already given in the reference documentation, will with some certainty lead people to question the sanity of the service's developers and ask for payload reduction. Just look around here on SO for a while and you will find enough questions asking how to optimize interactions further and reduce message sizes to a minimum, because such clients process every little request and don't make use of response caching at all. In the HATEOAS case, pointing to an external reference documentation is mostly useless, as peers in such an architecture most likely already have support for the required necessities, such as URIs, HTTP, and the respective media types, built in. Where custom media types are used, support can be added at runtime via plug-ins or add-ons (if supported).
So, Swagger and HATEOAS are not mutually exclusive, but one of them becomes more or less useless once you have decided on one route or the other.
Swagger: Swagger aids in development across the entire API lifecycle, from design and documentation to testing and deployment. (Refer to swagger.io)
HATEOAS: Hypermedia as the Engine of Application State
An Ion Form is a Collection Object where the value member array contains Form Fields. Ion Forms ensure that resource transitions (links) that support data submissions can be discovered automatically (colloquially referred to as HATEOAS). (Refer to https://ionspec.org/)
One is a framework that supports designing and testing APIs; the other is an API design architecture.
Building a RESTful API is not a binary proposition, which is why we use the Richardson maturity model to measure how RESTful an API is.
Based on this maturity model:
At level 0 we provide mechanisms for the client of the API to call some methods on the server (simple RPC).
At level 1 we expose resources on the server, so the client of the API can have direct access to the resources that it requires (exposing resources).
At level 2 we provide a uniform way for the client of the API to interact with the API (the exposed resources), and the HTTP protocol has these methods (using HTTP verbs to interact with resources).
At level 3, the ultimate step, we make our API explorable by the client. HATEOAS provides such functionality (over HTTP), meaning that it adds relevant links and affordances (extra methods) that can be executed on the resource, so the client of the API can understand its behavior.
Based on these definitions, in a properly designed RESTful API there is no coupling between client and server, and the client can interact with the exposed endpoints and discover them.
On the other hand, Swagger is a tool that helps you document your API, along with some extra goodies (code generators).
I believe that Swagger (with the help of SwaggerHub) provides services for implementing a RESTful endpoint up to maturity level 2. It does not go any further, and it does not provide proper support for HATEOAS.
You can define your resources and HTTP verbs in (JSON/YAML) definition files, and based on this definition Swagger can generate the API documentation and the extra goodies (client stubs and skeletal implementations of the server API).
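For a rough idea of what such a definition file contains, here is a minimal sketch of an OpenAPI (Swagger) document in JSON; the resource and descriptions are invented for this example:

{
  "openapi": "3.0.0",
  "info": { "title": "Example API", "version": "1.0.0" },
  "paths": {
    "/orders/{id}": {
      "get": {
        "parameters": [
          { "name": "id", "in": "path", "required": true, "schema": { "type": "string" } }
        ],
        "responses": {
          "200": { "description": "The requested order" }
        }
      }
    }
  }
}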
For all those who have worked with Java RMI, SOAP, etc., the extra-goodies part is a reminder of old technologies where there was tight coupling between client and server, because the stubs and skeletal implementations were all built from the same API definition file.

What is the difference between a "REST" API and a "Graph" API

I am creating an API project in Azure AD B2C for which I want to build a custom UI. For this requirement, I want to know which is better: the "REST" API or the "Graph" API.
Can anyone suggest which is better to use?
While GraphQL is often mentioned as the replacement for REST, the two actually tackle different problems.
REST, to start with, is not a protocol but just an architectural style which, applied correctly, decouples clients from servers. A server following the REST principles will therefore provide the client with any information needed to take further steps. A client initially starts without any a priori knowledge and learns on the fly by issuing requests and processing responses.
While REST is protocol agnostic, meaning it can be built on top of many protocols, HTTP is probably the most prominent one. A common example of a RESTful client is the Web browser we are all too familiar with: it starts by invoking either a bookmarked URI or one entered in the address bar, and progresses from there.
HTTP doesn't specify the representation a request or response has to be sent in, but leaves that for clients and servers to negotiate. This helps with decoupling, as both clients and servers can rely on the common interface (HTTP) and only bind strongly to the known media types used to exchange data. A peer that is not able to process a document in a certain representation (due to the lack of support for the respective MIME type) will indicate this to its counterpart via a respective error message. The media type, which is just a human-readable documentation of the syntax and the semantics of the data payload, is therefore the most important part of a REST architecture: it teaches a peer how to parse and interpret the received payload and to actually make sense of it. Plenty of people still mistake REST for a JSON-based HTTP API with over-engineered URIs, putting too much effort into giving the URI some kind of logical structure, when actually neither client nor server will interpret it anyway, as they will probably use the link relation name given for the URI.
GraphQL, on the other hand, is a query language which gives the client the power to request exactly the fields and elements it wants to retrieve from the server. It is, loosely speaking, a kind of SQL for the Web. It therefore has to have knowledge of the available data beforehand, which somewhat couples clients to the server. If the server renames some of the fields, the client might no longer be able to retrieve that information, though I'm not a GraphQL expert.
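As a rough illustration of that field-level coupling, a GraphQL client posts a query naming exactly the fields it wants; the endpoint, type, and field names below are invented for this example:

// Sketch: a GraphQL client asks for exactly the fields it needs. If the
// server renamed "email", this query would break, which illustrates the
// coupling described above.
const result = await fetch("https://example.com/graphql", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    query: `{ user(id: "1") { name email } }`,
  }),
});
console.log(await result.json());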
As stated above, REST is often mistaken for a JSON-based HTTP API that allows queries on directly mapped DB entries/entities. Keep in mind that REST doesn't prohibit this, though its focus is on the decoupling of peers, not on the retrieval of Web-exposed database entries.
In the context of Azure AD and its APIs, the term REST API is used when you access the Microsoft Graph service directly. You write all the HTTP communication code, authentication, JSON parsing, etc.
The term Graph API or Graph client is a reference to the Microsoft-developed Graph Client SDK which encapsulates the above.
If there is no SDK for your platform, you need to use the REST API directly. Otherwise, I would recommend using the SDK.

Adapter Proxy for RESTful APIs

This is a general 'what technologies are available' question.
My company provides a web application with a RESTful API. However, it is too slow for my needs and some of the results are in an awkward format.
I want to wrap their RESTful server with a proxy/adapter server, so that when you connect to the proxy you get the RESTful API I wish the real one provided.
So it needs to do a few things:
pass through most requests
cache some requests
make some extra requests to the original server to detect whether a request is cacheable
for instance, there is a request for a field in a record, GET /records/id/field, which might be slow, but there is a fingerprint request, GET /records/id/fingerprint, which is always fast. If a cached copy of GET /records/1/field2 exists for the fingerprint feedbeef, then I need to check that the original server still reports the fingerprint feedbeef before serving the cached version.
fix headers for some responses, e.g. Content-Type, based upon the path
do stream processing on some large content; for instance,
GET /records/id/attachments/1234
returns a 100 MB log file in text format
remove null characters from files
optionally recode the log to filter out irrelevant lines, reducing the load on the client
cache the filtered version for later requests.
While I could modify the client to achieve this functionality, such code would not be reusable for other clients (in different languages) and would complicate the client logic.
I had a look at whether Clojure/Ring could do it, and while there is a nice little proxy middleware for it, it doesn't handle streaming content as far as I can tell: the whole 100 MB would have to be downloaded. Also, it doesn't include any cache logic yet.
I took a look at whether Squid could do it, but I'm not familiar with the technology, and it seems mostly concerned with passing requests through rather than modifying them on the fly.
I'm looking for hints on where I might find the right technology to implement this. I'm mostly language agnostic, if learning a new language gets me access to a really simple way to do it.
I believe you should choose a platform that makes it easy for you to implement your custom business logic. The following web application frameworks provide easy connectivity with REST APIs and allow you to create a web application that can work as a REST proxy:
Play framework (Java + Scala)
express + Node.js (JavaScript)
Sinatra (Ruby)
I'm most familiar with Play, which I know provides caching utilities you could find useful and which is also extensible via a number of plugins.
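To make the fingerprint-based caching from the question concrete, here is a minimal sketch written with Express and Node's global fetch rather than Play; the upstream URL, routes, and response shapes are assumptions taken from the question:

import express from "express";

const upstream = "https://api.example.com"; // assumed slow original server
const cache = new Map<string, { fingerprint: string; body: string }>();

const app = express();

// Serve a cached field only while the record's (fast) fingerprint still
// matches; otherwise fall through to the slow upstream and re-cache.
app.get("/records/:id/:field", async (req, res) => {
  const { id, field } = req.params;
  const key = `${id}/${field}`;

  const fingerprint = await (await fetch(`${upstream}/records/${id}/fingerprint`)).text();

  const hit = cache.get(key);
  if (hit && hit.fingerprint === fingerprint) {
    res.send(hit.body); // cache is still valid
    return;
  }

  const body = await (await fetch(`${upstream}/records/${id}/${field}`)).text();
  cache.set(key, { fingerprint, body });
  res.send(body);
});

app.listen(8080);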
If you are familiar with Scala, you could also have a look at Finagle. It is a framework built by Twitter's infrastructure team to provide protocol-agnostic connectivity. It might be overkill for a REST-to-REST proxy, but it provides abstractions you might find useful.
You could also look at some third-party services like Apitools, which allows you to create a proxy programmatically (in Lua). Apirise is a similar service (of which I'm a co-founder) that intends to provide similar functionality with a user-friendly UI.
Beeceptor does exactly what you want. It plugs in between your web app and the original API to route requests.
For your use case of caching a few responses, you can create a rule so that those requests never hit the original endpoint.
Requests to the original APIs can be mocked, and you can inspect responses.
You can simulate delays.
(Note: this is a shameless plug; I am the author of Beeceptor and thought it could help you and other developers.)
https://github.com/nodejitsu/node-http-proxy looks useful, although I don't yet know whether it can do stream processing for transcoding.
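For what it's worth, node-http-proxy has a selfHandleResponse option that hands you the upstream response as a stream, which should allow on-the-fly transformation. A minimal sketch (the upstream URL is an assumption, and a real line-oriented filter would need to buffer partial lines across chunk boundaries):

import http from "http";
import { Transform } from "stream";
import httpProxy from "http-proxy";

// With selfHandleResponse, http-proxy does not pipe the upstream
// response itself, so we can run it through a Transform stream chunk by
// chunk; the 100 MB log never has to be held in memory as a whole.
const proxy = httpProxy.createProxyServer({
  target: "https://api.example.com", // assumed upstream
  changeOrigin: true,
  selfHandleResponse: true,
});

proxy.on("proxyRes", (proxyRes, req, res) => {
  // Strip NUL characters on the fly.
  const stripNulls = new Transform({
    transform(chunk, _encoding, callback) {
      callback(null, Buffer.from(chunk.toString("latin1").replace(/\0/g, ""), "latin1"));
    },
  });

  const headers = { ...proxyRes.headers };
  delete headers["content-length"]; // the body length changes after filtering
  res.writeHead(proxyRes.statusCode ?? 502, headers);
  proxyRes.pipe(stripNulls).pipe(res);
});

http.createServer((req, res) => proxy.web(req, res)).listen(8080);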

RESTful API runtime discoverability / HATEOAS client design

For a SaaS startup I'm involved in, I am building both a RESTful web API and a couple of client apps on different platforms that consume it. I think I've got the API figured out, but now I'm turning to the clients. As I've been reading about REST, I see that a key part of REST is discovery, but there seems to be a lot of debate between two different interpretations of what discovery really means:
Developer discovery: The developer hard-codes copious amounts of API details into the client, such as resource URIs, query parameters, supported HTTP methods, and other details discovered by browsing the docs and experimenting with the API's responses. This type of discovery IMHO necessitates stable ("cool") links and raises the API versioning question, and it leads to hard coupling of the client code to the API. It seems not much better than using a well-documented collection of RPCs.
Runtime discovery: The client app itself is able to figure out everything it needs with little or no out-of-band information (presumably only a knowledge of the media types the API deals with). Links can be hot. But to make the API very efficient, a lot of link templating for query parameters seems to be needed, which lets out-of-band info creep back in. There are possibly other difficulties I haven't thought of yet, since I haven't gotten to that point in development. But I do like the idea of loose coupling.
Runtime discovery seems to be the holy grail of REST, but I'm seeing precious little discussion about how to implement such a client. Almost all REST sources I've found seem to assume developer discovery. Does anyone know of some runtime discovery resources? Best practices? Examples or libraries with real code? I'm working in PHP (Zend Framework) for one client and Objective-C (iOS) for the other.
Is runtime discovery a realistic goal, given the present set of tools and knowledge in the developer community? I can write my client to treat all of the URIs in an opaque manner, but how to do this most efficiently is a question, especially over low-bandwidth connections. Anyway, URIs are only part of the equation. What about link templating in the runtime context? How about communicating which methods are supported, aside from making a lot of OPTIONS requests?
This is definitely a tough nut to crack. At Google, we've implemented our Discovery Service, which all our new APIs are built against. The TL;DR version is that we generate a JSON Schema-like spec that our clients can parse, many of them dynamically.
The result is easier SDK upgrades for the developer and easier, better maintenance for us.
By no means the perfect solution, but many of our devs seem to like it.
See the link for more details (and make sure to watch the video).
Fascinating. What you are describing is basically the HATEOAS principle. What is HATEOAS you ask? Read this: http://en.wikipedia.org/wiki/HATEOAS
In layman's terms, HATEOAS means link following. This approach decouples your client from specific URLs and gives you the flexibility to change your API without breaking anyone.
You did your homework and you got to the heart of it: runtime discovery is the holy grail. Don't chase it.
UDDI tells a poignant story of runtime discovery: http://en.wikipedia.org/wiki/Universal_Description_Discovery_and_Integration
One of the requirements that should be satisfied before you can call an API 'RESTful' is that it should be possible to write a generic client application on top of that API. With the generic client, a user should be able to access all the API's functionality. A generic client is a client application that does not assume that any resource has a specific structure beyond the structure that is defined by the media type. For example, a web browser is a generic client that knows how to interpret HTML, including HTML forms etc.
Now suppose we have an HTTP/JSON API for a web shop and we want to build an HTML/CSS/JavaScript client that gives our customers an excellent user experience. Would it be a realistic option to make that client a generic client application? No. We want to provide a specific look and feel for every specific data element and every specific application state. We don't want to include all knowledge about these presentation specifics in the API; on the contrary, the client should define the look and feel, and the API should only carry the data. This implies that the client has hard-coded coupling of specific resource elements to specific layouts and user interactions.
Is this the end of HATEOAS and thus the end of REST? Yes and no.
Yes, because if we hard-code knowledge about the API into the client, we lose the benefit of HATEOAS: server-side changes may break the client.
No, for two reasons:
Being "RESTful" is a property of the API, not of the client. As long as it is possible, in theory, to build a generic client that offers all capabilities of the API, the API can be called RESTful. The fact that clients don't obey the rules, is not the API's fault. The fact that a generic client would have a lousy user experience is not an issue. Why is it important to know that it is possible to have a generic client, if we don't actually have that generic client? This brings me to the second reason:
A RESTful API offers clients the option to choose how generic they want to be, i.e. how resilient to server-side changes they want to be. Clients which need to provide a great user experience may still be resilient to URI changes, to changes in default values and more. Clients doing batch jobs without user interaction may be resilient to other kinds of changes.
If you are interested in practical examples, check out my JAREST paper. The last section is about HATEOAS. You will see that with JAREST, even highly interactive and visually attractive clients can be quite resilient to server-side changes, though not 100%.
I think the important point about HATEOAS is not that it is some client-side holy grail, but that it isolates the client from URI changes: it is assumed you are using known (or developer-discovered custom) link relations that allow the system to know which link for an object is, for example, the editable form. The important point is to use a media type that is hypermedia-aware (e.g. HTML, XHTML, etc.).
You write:
To make the API very efficient, a lot of link templating for query parameters seems to be needed, which makes out-of-band info creep back in.
If that link template is supplied in the previous request, then there is no out-of-band information. For example, an HTML search form uses link templating (/search?q=%#) to generate a URL (/search?q=hateoas), but nothing is known by the client (the web browser) other than how to use HTML forms and GET.
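The same idea works outside HTML. A tiny sketch, where the template format and relation name are invented for illustration:

// The server supplied this link template in its previous response, so
// the client needs no out-of-band knowledge of the URI structure.
const link = { rel: "search", template: "/search?q={query}" };
const uri = link.template.replace("{query}", encodeURIComponent("hateoas"));
console.log(uri); // "/search?q=hateoas"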

Using SOAP to expose CRUD operations

Is exposing CRUD operations through SOAP web services a bad idea? My instinct tells me that it is, not least because the overhead of the database calls could be huge. I'm struggling to find documentation for or against this (anti)pattern, so I was wondering if anyone could point me to some documentation or has an opinion on the matter.
Also, if anyone knows of best practices (and/or documentation to that effect) for designing SOAP services, that would be great.
Here's an example of how the web service would look:
Create
Delete
Execute
Fetch
Update
And here's what the implementation would look like:
[WebMethod]
public byte[] Fetch(byte[] requestData)
{
    // Deserialize the incoming payload into a typed select request.
    SelectRequest request = (SelectRequest)Deserialize(requestData);

    // Run the query against the database and serialize the result
    // back into the byte[] the SOAP client expects.
    DbManager crudManager = new DbManager();
    object result = crudManager.Select(request.ObjectType, request.Criteria);
    return Serialize(result);
}
If you want to use SOAP in a RESTful manner, there is an interesting standard for this: WS-Transfer. It provides loosely coupled CRUD endpoints, from which you inspect the message and act upon your entities accordingly.
Then you can layer whatever else you want on top: WS-Security, WS-ReliableMessaging, and so on.
I think publishing a SOAP service that exposes CRUD operations to anonymous, public "users" would be a particularly bad idea. If, however, you can remove one or both of those caveats, then I see nothing wrong with it (indeed, I've implemented such services many times).
You can require, in addition to whatever method parameters you need to perform the operation, username and password parameters that authenticate the originator before the request is processed; a failure to authenticate can be signalled by returning a SOAP exception. If you were especially paranoid, you could optionally run the service over SSL.
You can have the server solution that deals with sending and receiving the requests filter by IP, only allowing requests from a list of approved addresses.
Yes, there is overhead to running requests over SOAP (as opposed to exposing direct database access), namely the processing time to wrap a request into an HTTP request, open a socket, and send it (and the reverse at the receiving end, and then again for the response), but it does have advantages.
Java (through the NetBeans IDE) and .NET (through Visual Studio) both support consuming web services into projects/solutions. The biggest benefit of this is that objects/structures on the remote service are automatically translated into native objects in the consuming application, which is exceptionally handy.
If all you want to do is CRUD over the web, I'd look at some different technologies for doing REST instead of using WS-*. ADO.NET Data Services (formerly Project Astoria) might actually be a good alternative.
There is nothing wrong with exposing CRUD operations via SOAP web services per se.
You will find quite a lot of examples of such services.
However, depending on your particular requirements, you might find that using SOAP is too much overhead, or that you would be better off using JSON/AJAX etc.
So I believe that unless you provide additional details about your particular situation, there is no good answer to your question.