I know this is a bit hacky, but current circumstances don't allow me to rewrite certain aspects of the application.
rpcService.someServiceCall(someParameter,
    new AsyncCallback<LargeClientObject>() { ... });
Basically, we have a very large response from the server to the client called LargeClientObject. Deserializing it on the client side takes a very long time. I was wondering what the best way would be to send the raw serialized data (a JSON string) to the client so that the client doesn't have to waste time running it through GWT-RPC deserialization.
I was wondering if there was a way to simply do:
rpcService.someServiceCall(someParameter, new ASyncCallback_WithNoClientSerialization<LargeClientObject>() { ... });
FYI, I've tried using RequestFactory to load ClientObjects, but it has many custom objects, which would take forever to write RequestProxies for, and I'd have to refactor most of the existing application.
I think you can consider two approaches.
A. Call a servlet to get a JSON response without using RPC.
B. Use the existing RPC service but change the return type to String instead of LargeClientObject, and pass a JSON string.
You probably have to test which approach works better.
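For approach B, here is a rough sketch of what the client side could look like, using GWT's JsonUtils to evaluate the raw JSON into an overlay type instead of going through GWT-RPC deserialization. The method name someServiceCallAsJson and the LargeClientOverlay type are hypothetical:

rpcService.someServiceCallAsJson(someParameter, new AsyncCallback<String>() {
    @Override
    public void onSuccess(String json) {
        // Skip GWT-RPC deserialization: evaluate the raw JSON into a lightweight overlay type.
        // JsonUtils lives in com.google.gwt.core.client.
        LargeClientOverlay result = JsonUtils.safeEval(json);
        // ... work with result ...
    }

    @Override
    public void onFailure(Throwable caught) {
        // handle the failure
    }
});

// Hypothetical overlay type backed directly by the parsed JavaScript object:
public class LargeClientOverlay extends JavaScriptObject {
    protected LargeClientOverlay() {}
    public final native String getSomeField() /*-{ return this.someField; }-*/;
}

The trade-off is that you lose the typed LargeClientObject on the client and work against the JSON structure directly.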
Is it possible to use Wiremock to mock a reactive backend? What I want to do is make Wiremock return chunked responses, but each chunk should be a valid JSON string (something that mimics a Reactor Flux-type response).
The scenario is something like this: I have a backend sending a stream of JSON objects that I can consume. Each JSON string can be unmarshalled into a POJO without needing to keep track of state (the chunk that came before). Each chunk that comes over the wire can have a different length.
Any ideas on how I can mock such a backend?
Most, if not all, API mocking, stubbing, faking, and replacement libraries (there is a wide variety of names, but it is fair to refer to these as API stubbing), such as Wiremock, do not support response payload chunking.
You are then left with two options:
A custom hand-made implementation, where you provide the chunking yourself on top of whichever library you use and its semantics
A simple test-scoped Controller stereotype that returns a reactive type (Flux) for your endpoint. You then let the underlying framework (Spring WebFlux, for example) handle the response streaming for you (the cleanest option in my opinion; see the sketch at the end of this answer)
That being said, you should be good to go with a mock API where you return an iterable type, which will get mapped automatically to its reactive counterpart, Flux, on the client side when called. The mapping and request/response handling are low-level details; it is the responsibility of the underlying framework to map your input and output accordingly, and you should not have to care how the endpoint is implemented, since your client should work the same way in all cases. Interoperability is the framework's responsibility, after all, not the application developer's.
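For the test-scoped controller option, a minimal sketch assuming Spring WebFlux (Spring 5.3+ for the NDJSON media type); the endpoint path and payloads are invented for illustration:

import java.time.Duration;

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
class FakeStreamController {

    // Each Flux element is written as its own chunk when the response is
    // streamed as newline-delimited JSON (application/x-ndjson).
    @GetMapping(value = "/stream", produces = MediaType.APPLICATION_NDJSON_VALUE)
    Flux<String> stream() {
        return Flux.just(
                "{\"id\":1,\"name\":\"first\"}",
                "{\"id\":2,\"name\":\"second\"}")
            .delayElements(Duration.ofMillis(100)); // simulate a slow backend
    }
}

Your client under test can then consume this endpoint exactly as it would consume the real streaming backend.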
I want to retrieve data about a bunch of resources. Let's say an array of book ids, and the response is a JSON array of book objects. I want to send the request payload as JSON to the server.
Should I use the GET or the POST method?
Note:
I don't want to make multiple GET request for each book ID.
POST seems to be confusing as it is supposed to be used only when the request creates a resource or modifies the server state.
I want to retrieve data about a bunch of resources. Let's say an array of book ids, and the response is a JSON array of book objects.
If you are thinking about passing the array of book id as the message body of the HTTP Request, then GET is a bad idea.
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
You should use POST instead
POST seems to be confusing as it is supposed to be used only when the request creates a resource or modifies the server state.
That's not quite right. POST can be used for anything -- see GraphQL or SOAP. But what you give up by using POST is the ability of intermediate components to participate in the conversation.
For example, for cases that are effectively read-only, you would like to use a safe method, because that allows pre-caching optimization and automated retry of lost responses on an unreliable network. POST doesn't promise safe semantics, so you lose those benefits.
What HTTP really wants is that you GET using the URI; this can be done in one of two relatively straightforward ways:
POST the ids to the server, to create a new resource (meaning that the server retains for itself a copy of the list of ids), and receive a new resource identifier back in exchange. Then GET using this new identifier any time you want to know the current representation of the results.
Encode the information you need into the URI itself. Most commonly, this is done using the query part of the URI, although that isn't strictly necessary. The downside here is that if the URI encoded representation of the array of ids is very long, you may have trouble with some implementations that enforce arbitrary URI limits.
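For illustration, the two options could look roughly like this on the wire (the paths and payload shape are made up):

POST /book-queries
Content-Type: application/json

{"ids": [111, 222, 333]}

201 Created
Location: /book-queries/42

GET /book-queries/42

...or, with the ids encoded into the URI itself:

GET /books?id=111&id=222&id=333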
There aren't always great answers:
The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.
If I understand correctly, you want to get a list of all of the items in a list in one pull. This would be possible using GET: the REST endpoint returns JSON and can by default return up to 100 items, and you can get more items if needed by specifying $top.
As far as writing back to the server, POST would be what you're looking for; to my understanding this would need to be one request per item.
You could use a GET request and put your request data (the book-id array) in the data section of your Ajax request (or whatever you're going to use). See How to pass parameters in GET requests with jQuery.
I am in a situation where I want to GET, in a RESTful way, only the objects that I require, by addressing them with one of the parameters that I know about those objects.
E.g., I want to GET all the USERS in my system with ids 111, 222, 333.
And the list can be bigger, so I don't think it is the best approach to append everything that is required to the URL; I would rather use a JSON payload.
But I am skeptical about using a JSON payload in a GET request.
Please suggest a better practice in the REST world.
I am skeptical about using a JSON payload in a GET request.
Your skepticism is warranted; here's what the HTTP specification has to say about GET
A payload within a GET request message has no defined semantics
Trying to leverage Undefined Behavior is a Bad Idea.
Please suggest a better practice in the REST world.
The important thing to recognize is that URI are identifiers; the fact that we sometimes use human readable identifiers (aka hackable URI) is a convenience, not a requirement.
So instead of a list of system ids, the URI could just as easily be a hash digest of the list of system ids (which is probably going to be unique).
So your client request would be, perhaps
GET /ea3279f1d71ee1e99249c555f3f8a8a8f50cd2b724bb7c1d04733d43d734755b
Of course, the hash isn't reversible - if there isn't already agreement on what that URI means, then we're stuck. So somewhere in the protocol, we're going to need to make a request to the server that includes the list, so that the server can store it. "Store" is a big hint that we're going to need an unsafe method. The two candidates I would expect to see here are POST or PUT.
A way of thinking about what is going on is that you have a single resource with two different representations - the "query" representation and the "response" representation. With PUT and POST, you are delivering to the server the query representation, with GET you are retrieving the response representation (for an analog, consider HTML forms - we POST application/x-www-form-urlencoded representations to the server, but the representations we GET are usually friendlier).
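A rough sketch of that two-representation exchange (reusing the example hash from above; the payload shape is made up):

PUT /ea3279f1d71ee1e99249c555f3f8a8a8f50cd2b724bb7c1d04733d43d734755b
Content-Type: application/json

{"ids": [111, 222, 333]}

...and later, any number of times:

GET /ea3279f1d71ee1e99249c555f3f8a8a8f50cd2b724bb7c1d04733d43d734755b
Accept: application/json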
Allowing the client to calculate a URI on its own and send a message to it is a little bit RPC-ish. What you normally do in a REST API is document a protocol with a known starting place (aka a bookmark) and a sequence of links to follow.
(Note: many things are labeled "REST API" that, well, aren't. If it doesn't feel like a human being navigating a web site using a browser, it probably isn't "REST". Which is fine; not everything has to be.)
But I believe POST and PUT are for requests that modify the data. Is it a good idea to use them for queries?
No, it isn't... but they are perfect for creating new resources. Then you can make safe GET calls to get the current representation of the resource.
REST (and of course HTTP) are optimized for the common case of the web: large grain hypermedia, caching, all that good stuff. Various use cases suffer for that, one of which is the case of a transient message with safe semantics and a payload.
TL;DR: if you don't have one of the use cases that HTTP is designed for, use POST -- and come to terms with the fact that you aren't really leveraging the full power of HTTP. Or use a different application protocol - you don't have to use HTTP if it's a bad fit.
I have a question regarding MSMQ...
I designed an async architecture like this:
Client -> WCF Service (hosted in a Windows Service) -> MSMQ
So basically the WCF service takes the requests, processes them, adds them to an INPUT queue and returns a GUID. The same WCF service (through a listener) takes the first message from the queue (does some stuff...) and then puts it into another queue (OUTPUT).
The problem is how to retrieve the result from the OUTPUT queue when a client requests it, because MSMQ does not allow random access to its messages and the only solution would be to iterate through all the messages and push them back in until I find the exact one I need. I do not want to use a DB for this OUTPUT queue, because of some limitations imposed by the client...
You can look in your output queue for your message by using:
// open the queue and peek at a specific message without removing it
var mq = new MessageQueue(outputQueueName);
mq.PeekById(yourId);
Receiving by Id (this removes the message from the queue):
mq.ReceiveById(yourId);
A queue is inherently a "first-in-first-out" kind of data structure, while what you want is a "random access" data structure. It's just not designed for what you're trying to achieve here, so there isn't any "clean" way of doing this. Even if there was a way, it would be a hack.
If you elaborate on the limitations imposed by the client perhaps there might be other alternatives. Why don't you want to use a DB? Can you use a local SQLite DB, perhaps, or even an in-memory one?
Edit: If you have a client dictating implementation details to their own detriment then there are really only three ways you can go:
Work around them. In this case, that could involve using a SQLite DB - it's just a file and the client probably wouldn't even think of it as a "database".
Probe deeper and find out just what the underlying issue is, ie. why don't they want to use a DB? What are their real concerns and underlying assumptions?
Accept a poor solution and explain to the client that this is due to their own restriction. This is never nice and never easy, so it's really a last resort.
You could use CorrelationId and set it when you send the message. Then, to receive that same message, you can pick out the specific message with ReceiveByCorrelationId, as follows:
message = queue.ReceiveByCorrelationId(correlationId);
Moreover, CorrelationId is a string with the following format:
Guid()\\Number
I am working on a small client server program to collect orders. I want to do this in a "REST(ful) way".
What I want to do is:
Collect all orderlines (product and quantity) and send the complete order to the server
At the moment I see two options to do this:
Send each orderline to the server: POST qty and product_id
I actually don't want to do this because I want to limit the number of requests to the server so option 2:
Collect all the orderlines and send them to the server at once.
How should I implement option 2? A couple of ideas I have are:
Wrap all orderlines in a JSON object and send this to the server or use an array to post the orderlines.
Is it a good idea or good practice to implement option 2, and if so, how should I do it?
What is good practice?
I believe that another correct way to approach this would be to create another resource that represents your collection of resources.
Example, imagine that we have an endpoint like /api/sheep/{id} and we can POST to /api/sheep to create a sheep resource.
Now, if we want to support bulk creation, we should consider a new flock resource at /api/flock (or /api/<your-resource>-collection if you lack a better meaningful name). Remember that resources don't need to map to your database or app models. This is a common misconception.
Resources are a higher-level representation, not tied directly to your data. Operating on a resource can have significant side effects, like firing an alert to a user, updating other related data, initiating a long-lived process, etc. For example, we could map a file system or even the unix ps command as a REST API.
I think it is safe to assume that operating a resource may also mean to create several other entities as a side effect.
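For example, a bulk create against the hypothetical flock resource could look like this (the payload shape is invented for illustration):

POST /api/flock
Content-Type: application/json

{"sheep": [{"name": "Dolly"}, {"name": "Shaun"}]}

The response can then describe the whole flock that was created, or link to the individual sheep resources.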
Although bulk operations (e.g. batch create) are essential in many systems, they are not formally addressed by the RESTful architecture style.
I found that POSTing a collection as you suggested basically works, but problems arise when you need to report failures in response to such a request. Such problems are worse when multiple failures occur for different causes or when the server doesn't support transactions.
My suggestion to you is that if there is no performance problem, for example when the service provider is on the LAN (not WAN) or the data is relatively small, it's worth it to send 100 POST requests to the server. Keep it simple, start with separate requests and if you have a performance problem try to optimize.
Facebook explains how to do this: https://developers.facebook.com/docs/graph-api/making-multiple-requests
Simple batched requests
The batch API takes in an array of logical HTTP requests represented as JSON arrays - each request has a method (corresponding to HTTP method GET/PUT/POST/DELETE etc.), a relative_url (the portion of the URL after graph.facebook.com), an optional headers array (corresponding to HTTP headers) and an optional body (for POST and PUT requests). The Batch API returns an array of logical HTTP responses represented as JSON arrays - each response has a status code, an optional headers array and an optional body (which is a JSON encoded string).
Your idea seems valid to me. The implementation is a matter of your preference. You can use JSON or just parameters for this ("order_lines[]" array) and do
POST /orders
Since you are going to create multiple resources at once in a single action (the order and its lines), it's vital to validate each and every one of them and save them only if all of them pass validation, i.e. you should do it in a transaction.
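As an illustration, the body of that single POST /orders call could look something like this (the field names are just an example):

{
  "order_lines": [
    {"product_id": 17, "qty": 2},
    {"product_id": 42, "qty": 1}
  ]
}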
I've actually been wrestling with this lately, and here's what I'm working towards.
If a POST that adds multiple resources succeeds, return a 200 OK (I was considering a 201, but the user ultimately doesn't land on a resource that was created) along with a page that displays all resources that were added, either in read-only or editable fashion. For instance, a user is able to select and POST multiple images to a gallery using a form comprising only a single file input. If the POST request succeeds in its entirety the user is presented with a set of forms for each image resource representation created that allows them to specify more details about each (name, description, etc).
In the event that one or more resources fails to be created, the POST handler aborts all processing and appends each individual error message to an array. Then, a 409 Conflict is returned and the user is routed to a 409 Conflict error page that presents the contents of the error array, as well as a way back to the form that was submitted.
I guess it's better to send separate requests within a single connection (HTTP keep-alive). Of course, your web server should support it.
You won't want to send the HTTP headers 100 times for 100 orderlines, nor do you want to generate any more requests than necessary.
Send the whole order in one JSON object to the server, to: server/order or server/order/new.
Return something that points to: server/order/order_id
Also consider using PUT (create) instead of POST.