According to the testing section of the docs, I can verify that a particular payload has been sent to a stream. If that's a POJO message that gets converted into JSON in the end though, I can't seem to find native support for asserting that JSON. Instead, I can only access the payload in the form of that POJO and do all sorts of assertions on it.
Currently, I just have a separate set of tests to ensure that a particular POJO type gets serialized into the JSON I expect. But maybe there's built-in support for testing the whole flow, from calling a method to verifying the final JSON that ends up in Kafka.
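For reference, here is roughly what that separate serialization test looks like: a minimal sketch, assuming Jackson and JSONAssert on the test classpath (the MyEvent POJO and the expected JSON are hypothetical stand-ins for my real message types):

import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;
import org.skyscreamer.jsonassert.JSONAssert;

class MyEventSerializationTest {

    // Hypothetical POJO standing in for the real message type (Jackson 2.12+
    // serializes records out of the box).
    record MyEvent(String id, String type) {}

    private final ObjectMapper mapper = new ObjectMapper();

    @Test
    void serializesToExpectedJson() throws Exception {
        String json = mapper.writeValueAsString(new MyEvent("42", "created"));
        // strict = false tolerates field reordering but fails on wrong values
        JSONAssert.assertEquals("{\"id\":\"42\",\"type\":\"created\"}", json, false);
    }
}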
Is it possible to use WireMock to mock a reactive backend? What I want to do is make WireMock return chunked responses, where each chunk is a valid JSON string (something that mimics a Reactor Flux-type response).
The scenario is something like this: I have a backend sending a stream of JSON objects that I can consume. Each JSON string can be unmarshalled into a POJO without needing to keep track of state (the chunk that came before). Each chunk that comes over the wire can have a different length.
Any ideas on how I can mock such a backend?
Most, if not all, API mocking, stubbing, faking, and replacing libraries (there is a wide variety of names, but it is fair to refer to all of these as API stubbing), such as WireMock, do not support response payload chunking.
You are then left with two options:
A custom hand-rolled implementation, where you provide the chunking yourself on top of whichever library you use and its semantics
A simple test-scoped Controller stereotype that returns a reactive type (Flux) from your endpoint. You then let the underlying framework (Spring WebFlux, for example) handle the response streaming for you (the cleanest option in my opinion; see the sketch at the end of this answer)
That being said, you should be good to go with a mock API where you return an iterable type, which the framework maps automatically to its reactive counterpart, Flux, when called. The mapping and request/response handling are low-level details: it is the responsibility of the underlying framework to map your input and output accordingly, and you should not worry about how the endpoint is implemented, since your client should behave the same way in all cases. Interoperability is the framework's responsibility, after all, not the application developer's.
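Here is a minimal sketch of that test-scoped controller, assuming Spring WebFlux 5.3+ is on the test classpath; the /stub/stream path and the Chunk type are made up for illustration:

import java.time.Duration;

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
class StreamingStubController {

    // Hypothetical chunk type; each element is serialized as one JSON object.
    record Chunk(String value) {}

    // NDJSON writes one JSON document per line, so every chunk on the wire is
    // a stand-alone, valid JSON string, matching the requirement above.
    @GetMapping(value = "/stub/stream", produces = MediaType.APPLICATION_NDJSON_VALUE)
    Flux<Chunk> stream() {
        return Flux.just(new Chunk("first"), new Chunk("second"), new Chunk("third"))
                   .delayElements(Duration.ofMillis(100)); // simulate a slow backend
    }
}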
I am sending messages to Azure Service Bus (ASB) using WCF NetMessaging.
The message can contain any number of custom data contracts.
I have a Service Fabric stateless service with a custom listener for ASB delivering messages pushed onto the queue. All examples I’ve seen are able to handle only a single type of message (most guidance seems to be to serialize to JSON, but that’s not what I need to do here). I want the subscriber to the queue to be able to handle a number of messages (any message sent to any action of the service).
I am able to add the Action to the BrokeredMessage.Properties so I know where to send it. The problem is I haven’t figured out how to deserialize the message body in any way that works.
I can read it from a stream and get it into a string, but can’t do this:
var myDTO = message.GetBody<MyDTO>();
That throws serialization exceptions. I’ve also tried a variant of that passing in a DataContractSerializer - even though I think that is the default.
Furthermore, what I really need is a way to do this without knowing the type of data in the body. I could, conceivably, add more message.Properties for the types serialized in the body, but I figure there has to be a direct way to do it using only the data in the body - after all, WCF and similar techs do this with ease. But how?
Thanks for any help,
Will
To have a stand-alone message body:
Create an envelope type that describes the content (type name, sender, timestamp, etc.) and holds a payload (string) property to contain the serialized object.
To send messages, serialize (and optionally compress and encrypt) the object and assign the result to the payload property of an Envelope instance. Serialize the Envelope and send that out.
To receive messages, deserialize the message body into an Envelope, examine the type information, and deserialize the payload.
This is more or less how SOAP based WCF services do/did it.
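The pattern itself is language-agnostic; here is a minimal sketch in Java with Jackson (in WCF you would use DataContracts and a DataContractSerializer instead). The Envelope field names and the wrap/unwrap helpers are assumptions for illustration:

import com.fasterxml.jackson.databind.ObjectMapper;

public final class Envelope {
    public String typeName;   // fully qualified name of the payload type
    public String sender;
    public long timestamp;
    public String payload;    // the payload object, serialized to a string

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Wrap any object: serialize it, record its type, then serialize the envelope.
    public static String wrap(Object body, String sender) throws Exception {
        Envelope e = new Envelope();
        e.typeName = body.getClass().getName();
        e.sender = sender;
        e.timestamp = System.currentTimeMillis();
        e.payload = MAPPER.writeValueAsString(body);
        return MAPPER.writeValueAsString(e);
    }

    // Unwrap: read the envelope first, then pick the payload type from typeName.
    public static Object unwrap(String message) throws Exception {
        Envelope e = MAPPER.readValue(message, Envelope.class);
        return MAPPER.readValue(e.payload, Class.forName(e.typeName));
    }
}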
Make sure your DTO is DataContract-serializable by writing some unit tests.
Keep in mind that the message body size is limited in ASB; XML may not be your best choice of serialization format.
You may also be hitting this issue.
So I finished a server in Node using Express (developed through testing), and while developing my frontend I realized that Java doesn't allow any body payload in GET requests. I've done some reading around and understand that the HTTP spec does allow it, but the common standard is not to put any payload in a GET. So if not this, what's the best way to get this data into a GET request? The data I include is mostly simple fields, except that in my structure some of them are nested and some are arrays, which is why sending JSON seemed so easy. How else should I do this?
For example, I have a request with this data:
{
  token,
  filter: {
    author_id,
    id
  },
  limit,
  offset
}
I've done some reading around and understand that the HTTP spec does allow it, but the common standard is not to put any payload in a GET.
Right - the problem is that there are no defined semantics, which means that even if you control both the client and the server, you can't expect intermediaries participating in the exchange to cooperate. See RFC 7231.
The data I include is mostly simple fields, except that in my structure some of them are nested and some are arrays, which is why sending JSON seemed so easy. How else should I do this?
The HTTP POST method is the appropriate way to deliver a payload to a resource. POST is the most general method in the HTTP vocabulary; it covers all use cases, even those covered by other methods.
What you lose with POST is safety and idempotence, and you don't get any decent caching behavior.
On the other hand, if the JSON document is being used to constrain the representation that is returned by the resource, then it is correct to say that the JSON is part of the identifier for that document, in which case you encode it into the query:
/some/hierarchical/part?{encoded json goes here}
This gives you back the safe semantics, supports caching, and so on.
Of course, if your json structures are complicated, then you may find yourself running into various implicit limits on URI length.
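A quick sketch of that encoding in Java, assuming Jackson for the serialization; the query parameter name q and the path are made up for illustration:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;

public class QueryEncodingExample {
    public static void main(String[] args) throws Exception {
        // Serialize the filter document to JSON ...
        String json = new ObjectMapper().writeValueAsString(
                Map.of("filter", Map.of("author_id", 7, "id", 42),
                       "limit", 20, "offset", 0));

        // ... then percent-encode it so it can travel safely in the URI.
        String encoded = URLEncoder.encode(json, StandardCharsets.UTF_8);
        System.out.println("/some/hierarchical/part?q=" + encoded);
        // The request stays safe and cacheable, but mind the URI length limits.
    }
}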
I found some interesting query-string conventions for GET that allow more complex objects to be passed (such as arrays and objects with nested properties). Many frameworks that parse GET queries seem to support them.
For arrays, repeat the field. For example, for the array ids=[1,2,3]:
test.com?ids=1&ids=2&ids=3
For nested objects such as
{
  filter: {
    id: 5,
    post: 2
  }
}
use bracket notation:
test.com?filter[id]=5&filter[post]=2
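The exact syntax is framework-dependent (the bracket notation above comes from the qs parser that Express uses, for example), but the repeated-field convention for arrays is widely supported. A small sketch, assuming Spring MVC, where repeated parameters bind to a list; the endpoint and names are made up:

import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class PostQueryController {

    // GET /posts?ids=1&ids=2&ids=3 arrives here as the list [1, 2, 3].
    @GetMapping("/posts")
    List<Integer> byIds(@RequestParam("ids") List<Integer> ids) {
        return ids; // echo back for demonstration
    }
}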
Search - the request contains query parameters, e.g. a search term and pagination values. No changes/data are persisted to the backend.
I currently use GET with query parameters here.
Data conversion - the request contains data in format A and the server sends back data in format B. No changes/data are persisted to the backend.
I currently use POST with request parameters here.
For your data conversion use case (which seems to be more of a function than working with a representation of something on the server), the answer is grounded more in general HTTP verb principles than in RESTful principles. Both cases are safe: they make no changes on the server, so GET should be used.
This question has a good discussion of the topic, especially this comment:
REST and function don't go well together. If an URL contains function, method, or command, I smell RPC – user1907906
Search - the request contains query parameters, e.g. a search term and pagination values. No changes/data are persisted to the backend.
If the request is supposed to generate no changes on the back end, then you are describing a request which is safe, so you should choose the most suitable safe method: GET if you care about the representation, HEAD if you only care about the metadata.
Data conversion - the request contains data in format A and the server sends back data in format B. No changes/data are persisted to the backend.
Unless you can cram the source representation into the URL, POST is your only reasonable choice here. There is no method in HTTP for "this is a safe method with a payload".
In practice, you could perhaps get away with using PUT rather than POST -- it's an abuse of the uniform interface, but one that allows you to communicate at least the fact that the semantics are idempotent. The key loophole is:
there is no guarantee that such a state change will be observable, since the target resource might be acted upon by other user agents in parallel, or might be subject to dynamic processing by the origin server, before any subsequent GET is received. A successful response only implies that the user agent's intent was achieved at the time of its processing by the origin server.
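To make the trade-off concrete, here is a sketch of the data conversion case as a POST, assuming Spring MVC; the /convert endpoint and the CSV-to-JSON conversion are made up for illustration:

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ConversionController {

    // Safe in practice (no server state changes) but expressed as POST,
    // because HTTP has no "safe method with a payload".
    @PostMapping(value = "/convert", consumes = MediaType.TEXT_PLAIN_VALUE,
                 produces = MediaType.APPLICATION_JSON_VALUE)
    String convert(@RequestBody String csv) {
        // Hypothetical conversion from format A (CSV) to format B (JSON).
        String[] fields = csv.trim().split(",");
        return "{\"fieldCount\":" + fields.length + "}";
    }
}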
I know this is a bit hacky, but current circumstances don't allow me to rewrite certain aspects of the application.
rpcService.someServiceCall(someParameter,
        new AsyncCallback<LargeClientObject>() {
            // onSuccess / onFailure handlers here
        });
Basically, we have a very large response from the server to the client called LargeClientObject. Deserialization on the client side is taking a very long time. I was wondering what the best way would be to send the raw data (JSON) to the client so that the client doesn't have to waste time deserializing it through GWT RPC.
I was wondering if there was a way to simply do:
rpcService.someServiceCall(someParameter,
        new ASyncCallback_WithNoClientSerialization<LargeClientObject>() {
FYI, I've tried using RequestFactory to load ClientObjects, but there are many custom objects that would take forever to write RequestProxies for, and I'd have to refactor most of the existing application.
I think you can consider two approaches:
A. Call a servlet to get a JSON response without using RPC.
B. Use the existing RPC service but change the return type to String instead of LargeClientObject, and pass a JSON string.
You probably have to test which approach works better.
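A rough sketch of approach B, assuming the service interface pair is regenerated to return String and that Jackson is available server-side; loadLargeObject and the method names are hypothetical:

// Server side (in the RemoteServiceServlet implementation): serialize once,
// return the JSON as a plain String so GWT RPC only transfers one string.
public String someServiceCallAsJson(String someParameter) {
    LargeClientObject result = loadLargeObject(someParameter); // existing logic
    try {
        return new com.fasterxml.jackson.databind.ObjectMapper()
                .writeValueAsString(result);
    } catch (com.fasterxml.jackson.core.JsonProcessingException e) {
        throw new RuntimeException(e);
    }
}

// Client side: parse the raw JSON with GWT's JsonUtils.safeEval instead of
// letting RPC deserialize a large object graph.
rpcService.someServiceCallAsJson(someParameter, new AsyncCallback<String>() {
    @Override
    public void onSuccess(String json) {
        JavaScriptObject obj = JsonUtils.safeEval(json);
        // work with obj through overlay types / JSNI
    }

    @Override
    public void onFailure(Throwable caught) {
        // handle the error
    }
});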