WireMock - Mocking a reactive backend

Is it possible to use WireMock to mock a reactive backend? What I want to do is make WireMock return chunked responses, where each chunk is a valid JSON string (something that mimics a Reactor Flux-type response).
The scenario is something like this: I have a backend sending a stream of JSON objects that I can consume. Each JSON string can be unmarshalled into a POJO without the need to keep track of state (the chunk that came before). Each chunk that comes over the wire can have a different length.
Any ideas on how I can mock such a backend?

Most, if not all, API mocking, stubbing, faking, and replacement libraries (there is a wide variety of names, but it is fair to refer to these as API stubbing) such as WireMock do not support response payload chunking.
You are then left with two options:
A custom, hand-made implementation, where you provide the chunking yourself based on the library you use and its semantics
A simple test-scoped Controller stereotype that returns a reactive type (Flux) for your endpoint. Then you let the underlying framework (Spring WebFlux, for example) handle the response streaming for you (the cleanest option in my opinion; see the sketch at the end of this answer)
That being said, you should be good to go with a mock API where you return an iterable type, which will get mapped automatically by the client to its reactive counterpart, Flux, when called. The mapping and request/response handling are low-level details; it is the responsibility of the underlying framework to map your input and output accordingly, and you should not need to care how the endpoint is implemented, since your client should work the same way in every case. Ensuring interoperability is, after all, the responsibility of the framework and not of an application developer.
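As a rough illustration of the second option, here is a minimal sketch of a test-scoped controller that streams newline-delimited JSON with Spring WebFlux. The MyPojo type, the /stream path, and the sample values are made up for the example; adapt them to your own payloads.

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
class StreamingStubController {

    // Hypothetical payload type; each element is serialized as its own JSON document.
    record MyPojo(String value) { }

    @GetMapping(value = "/stream", produces = MediaType.APPLICATION_NDJSON_VALUE)
    Flux<MyPojo> stream() {
        // WebFlux writes and flushes each element as a separate chunk on the wire.
        return Flux.just(new MyPojo("first"), new MyPojo("second"), new MyPojo("third"));
    }
}

A WebFlux client consuming this endpoint as a Flux<MyPojo> receives each element as it is flushed, which is close to how the real streaming backend behaves.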

Related

Single endpoint instead of API - what are the disadvantages?

I have a service, which is exposed over HTTP. Most of the traffic gets into it via a single HTTP GET endpoint, in which the payload is serialized and encrypted (RSA). The client systems have common code, which ensures that serialization and deserialization will succeed. One of the encoded parameters is the operation type; in my service there is a huge switch (almost 100 cases) that checks which operation is being performed and executes the proper code.
switch (operationType) { // operationType: the decoded operation-type parameter
    case OPERATION_1: {
        operation = new Operation1Class(basicRequestData, serviceInjected);
        break;
    }
    case OPERATION_2: {
        operation = new Operation2Class(basicRequestData, anotherServiceInjected);
        break;
    }
    // ... almost 100 cases in total
}
These operations come in a few types: some are typical resource operations (GET_something, UPDATE_something), some are method-based (VALIDATE_something, CHECK_something).
I am thinking about refactoring the API of the service so that it is more RESTful, especially in the resource-based part of the system. To do so I would probably split the single endpoint into proper endpoints (e.g. /resource/{id}/subresource) or RPC-like endpoints (/validateSomething). I feel it would be better; however, I cannot come up with any argument for this.
The question is: what are the advantages of the refactored solution, and conversely, what are the disadvantages of the current solution?
The current solution separates the client from the server, it's scalable (adding a new endpoint requires adding a new operation type in the common code), and it's quite clear; two clients use it, in two different programming languages. I know that the API sits at level 0 in the Richardson Maturity Model, but I cannot come up with a reason why I should change it to level 3 (or at least level 2 - resources and methods).
Most of the traffic gets into it via a single HTTP GET endpoint, in which the payload is serialized and encrypted (RSA)
This is potentially a problem here, because the HTTP specification is quite clear that GET requests with a payload are out of bounds.
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
It's probably worth taking some time to review this, because it seems that your existing implementation works, so what's the problem?
The problem here is interop - can processes controlled by other people communicate successfully with the processes that you control? The HTTP standard gives us shared semantics for our "self descriptive messages"; when you violate that standard, you lose interop with things that you don't directly control.
And that in turn means that you can't freely leverage the wide array of solutions that we already have in support of HTTP, because you've introduced this inconsistency in your case.
The appropriate HTTP method to use for what you are currently doing? POST
REST (aka Richardson Level 3) is the architectural style of the world wide web.
Your "everything is a message to a single resource" approach gives up many of the advantages that made the world wide web catastrophically successful.
The most obvious of these is caching. "Web scale" is possible in part because the standardized caching support greatly reduces the number of round trips we need to make. However, the grain of caching in HTTP is the resource -- everything keys off of the target-uri of a request. Thus, by having all information shared via a single target-uri, you lose fine grain caching control.
You also lose safe request semantics - with every message buried in a single method type, general purpose components can't distinguish between "effectively read only" messages and messages that request that the origin server modify its own resources. This in turn means that you lose pre-fetching, and automatic retry of safe requests when the network is unstable.
In all, you've taken a rather intelligent application protocol and crippled it, leaving yourself with a transport protocol.
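To make the contrast concrete, here is a minimal sketch of what a couple of the split endpoints could look like, assuming a Java service with Spring MVC purely for illustration; the paths, types, and service names are hypothetical.

import java.time.Duration;
import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/somethings")
class SomethingController {

    private final SomethingService somethingService; // hypothetical application service

    SomethingController(SomethingService somethingService) {
        this.somethingService = somethingService;
    }

    // Safe, cacheable read: general-purpose components can cache and retry it,
    // keyed off the target-uri /somethings/{id}.
    @GetMapping("/{id}")
    ResponseEntity<Something> get(@PathVariable String id) {
        return ResponseEntity.ok()
                .cacheControl(CacheControl.maxAge(Duration.ofMinutes(5)))
                .body(somethingService.find(id));
    }

    // Unsafe, command-style operation stays a POST, so intermediaries know
    // not to cache, prefetch, or blindly retry it.
    @PostMapping("/{id}/validations")
    ValidationResult validate(@PathVariable String id, @RequestBody ValidationRequest request) {
        return somethingService.validate(id, request);
    }
}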
That's not necessarily the wrong choice for your circumstances - SOAP is a thing, after all, and again, your service does seem to be working as is, which implies that you don't currently need the capabilities that you've given up.
It would make me a little bit suspicious, in the sense that if you don't need these things, why are you using HTTP rather than some messaging protocol?

Which verb to use for a REST request which sends data and gets data back?

Search - request contains query parameters e.g. search term and pagination values. No changes/data is persisted to backend.
I currently use GET with query parameters here.
Data conversion - request contains data in format A and server sends data in format B. No changes/data is persisted to backend.
I currently use POST with request parameters here.
For your Data Conversion use case (which seems to be more of a function than working with a representation of something on the server), the answer is grounded more in higher-level HTTP verb principles than in RESTful principles. Both cases are safe and idempotent: they make no changes on the server, so GET should be used.
This question has a good discussion of the topic, especially this comment:
REST and function don't go well together. If an URL contains function, method, or command, I smell RPC – user1907906
Search - request contains query parameters e.g. search term and pagination values. No changes/data is persisted to backend.
If the request is supposed to generate no changes on the back end, then you are describing a request which is safe, so you should choose the most suitable safe method - GET if you care about the representation, HEAD if you only care about the meta data.
Data conversion - request contains data in format A and server sends data in format B. No changes/data is persisted to backend.
Unless you can cram the source representation into the URL, POST is your only reasonable choice here. There is no method in HTTP for "this is a safe method with a payload".
In practice, you could perhaps get away with using PUT rather than POST -- it's an abuse of the uniform interface, but one that allows you to communicate at least the fact that the semantics are idempotent. The key loophole is:
there is no guarantee that such a state change will be observable, since the target resource might be acted upon by other user agents in parallel, or might be subject to dynamic processing by the origin server, before any subsequent GET is received. A successful response only implies that the user agent's intent was achieved at the time of its processing by the origin server.
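For what it's worth, here is a minimal sketch of both cases, assuming a Java service with Spring MVC; the paths, media types, and helper names are made up.

import java.util.List;
import org.springframework.web.bind.annotation.*;

@RestController
class SearchAndConversionController {

    // Search: safe, no payload needed, so GET with query parameters.
    @GetMapping("/articles")
    List<Article> search(@RequestParam String term,
                         @RequestParam(defaultValue = "0") int page) {
        return articleService.search(term, page); // articleService is a hypothetical collaborator
    }

    // Conversion: the source representation has to travel in the request body,
    // and HTTP has no "safe method with a payload", so POST is the practical choice.
    @PostMapping(value = "/conversions",
                 consumes = "application/vnd.example.format-a+json",
                 produces = "application/vnd.example.format-b+json")
    FormatB convert(@RequestBody FormatA input) {
        return converter.convert(input); // converter is a hypothetical collaborator
    }
}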

Choosing a Websocket REST Paradigm

Since REST is an architectural style, not a protocol, it can be applied to most any protocol form - such as websockets. That's exactly what I'd like to do, but I'd like help deciding on an approach.
As I do my research, I'm finding that there are three paradigms I could follow:
Simulated Request-Response. This is what is implemented in the SwaggerSocket library. Each client request has an ID. Each server-push response has the same ID, to allow correlation of request-response.
Notification-Only. The server pushes a resource address, implying that the client should perform a GET on that resource to discover what has changed.
Event Driven. The server-push is designed to look like an HTTP POST, perhaps similar to a webhook request.
I'd like to hear from those who have experience walking these paths, about which they found to be the most effective, and which tools they applied (such as SwaggerSocket mentioned above).
A major concern I have is simplified demuxing and de-serialization. For example, suppose I have a client written in TypeScript. I might like to deserialize a server-push payload into a declared, typed object. I think the consequences for each paradigm are as follows:
Simulated Request-Response (SRR). The SwaggerSocket message must be de-serialized twice. First to discover the response ID and/or "path", and second to retrieve the actual payload into a "typed" object. A little clumsy, but doable.
Notification-Only. The server-push message can be deserialized into a single pre-defined type, since it contains little more than a REST resource path.
Event Driven. If the "event" has no payload, then this is basically the same thing as the Notification-Only approach. But if there is a payload, then once again a two-step deserialization would likely be necessary.
Other thoughts I have: The SRR might be the most limiting of the three, because every server-push theoretically is instigated by a client request. The other two paradigms don't have that implicit model. The Event Driven approach has the conceptual advantage of being similar-ish to a webhook callback.
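To make the Notification-Only variant concrete as well, here is a rough server-side sketch using the standard javax.websocket API (Java chosen only for illustration; the endpoint path and the pushed resource URI are made up).

import java.io.IOException;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/notifications")
public class NotificationEndpoint {

    @OnOpen
    public void onOpen(Session session) throws IOException {
        // Notification-Only push: send just the resource path of whatever changed.
        // The client reacts by issuing a normal GET on that path to discover the change.
        // In a real application this would be triggered by a domain event, not by the
        // connection opening; it is done here only to keep the sketch self-contained.
        session.getBasicRemote().sendText("/orders/42");
    }
}

On the client side the pushed message deserializes trivially, since it is nothing more than a resource path string.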
To illustrate the Event Driven / webhook idea, I'll give a SignalR example.
//Client side.
hubProxy.On<HttpRequest>("EventNotice", request => {
    //Take apart the HttpRequest and dispatch it through
    //my own routing mechanism...to handlers that
    //further deserialize the inner payload.
});
//The corresponding server-push would of course be:
_context.Clients.All.EventNotice(myHttpRequest);
The above example is very achievable. In fact, I would not be surprised if somebody has example code (please share!) or even a supporting library for this purpose.
Again, of these different paradigms, which would you advise? What supporting tools would you suggest?

How to define transforms on a resource in a REST way?

I'm designing a REST API, following best practices, including a form of hypermedia/HATEOAS. I'm using JSON:API for the design guidelines, which seem to be pretty complete.
Currently, I have a need for:
1. Combining two resources in a response (a resource A and a related resource B). I do this using the Compound Documents structure as specified in JSON:API, also commonly known as resource expansion.
2. Formatting the result of 1 in a specialized way, so it can be readily consumed by a specialized client that expects said formatting.
My problem is with 2. How do I correctly represent this in a REST-way? It seems I may need a separate endpoint, but that wouldn't be 'RESTy', since that implies a separate resource, while it's just a transformation of the output of the same resource.
Any references on how to do this?
You could use a header or a query param to handle this.
When the client needs specific formatting, it could add an additional header to the request, something like Format: Indented, or use a query parameter, something like http://myapp.com/resources/myresource?format=indented
Or, if the server is doing the formatting and wants the client to know that the response is pre-formatted, the server could add a Format response header to notify the client that the response is formatted.
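As a minimal sketch of the query-parameter variant, assuming a Spring MVC handler (the parameter name, the Format header, and the serializer lookup are illustrative only):

@GetMapping("/resources/{id}")
ResponseEntity<String> getResource(@PathVariable String id,
                                   @RequestParam(defaultValue = "default") String format) {
    Object resource = repository.find(id);       // hypothetical lookup
    String body = serializers.forFormat(format)  // pick the requested representation
                             .serialize(resource);
    return ResponseEntity.ok()
            .header("Format", format)            // tell the client which formatting was applied
            .body(body);
}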

In GWT RPC, How to send raw deserialized response object to client?

I know this is a bit hacky but current circumstances can't allow me to rewrite certain aspects of the application.
rpcService.someServiceCall(someParameter,
    new AsyncCallback<LargeClientObject>() {
        // onSuccess(LargeClientObject result) / onFailure(Throwable caught) handlers
    });
Basically, we have a very large response from the server to the client, called LargeClientObject. Deserialization on the client side is taking a very long time. I was wondering what the best way would be to send the raw data (JSON) to the client so that the client doesn't have to waste time deserializing it.
I was wondering if there was a way to simply do:
rpcService.someServiceCall(someParameter,
    new ASyncCallback_WithNoClientSerialization<LargeClientObject>() { /* ... */ });
FYI, I've tried using RequestFactory to load the client objects, but there are many custom objects which would take forever to write RequestProxies for, and I'd have to refactor most of the existing application.
I think you may consider two approaches.
A. Call a servlet to get a JSON response without using RPC.
B. Use the existing RPC service but change the return type to String instead of LargeClientObject, and pass a JSON string.
You will probably have to test which approach works better.
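As a rough sketch of approach B, assuming standard GWT-RPC service interfaces (the names and the overlay type are made up): the server serializes the data once to a JSON string, and the client turns it into a lightweight JavaScriptObject overlay with JsonUtils.safeEval instead of paying for GWT-RPC deserialization of LargeClientObject.

// Synchronous service interface (the server-side servlet implements this).
public interface SomeService extends RemoteService {
    String someServiceCallAsJson(String someParameter); // raw JSON instead of LargeClientObject
}

// Asynchronous counterpart used by the client.
public interface SomeServiceAsync {
    void someServiceCallAsJson(String someParameter, AsyncCallback<String> callback);
}

// Client side: receive the JSON as a plain String and evaluate it into an overlay type.
rpcService.someServiceCallAsJson(someParameter, new AsyncCallback<String>() {
    public void onSuccess(String json) {
        // LargeClientOverlay is a hypothetical JavaScriptObject overlay mirroring LargeClientObject.
        LargeClientOverlay overlay = JsonUtils.safeEval(json);
        // Read overlay fields directly; no GWT-RPC deserialization cost.
    }
    public void onFailure(Throwable caught) {
        // handle the error
    }
});

The trade-off is that the client loses the typed LargeClientObject and works against the overlay instead.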