I'm calling a third-party document-upload REST API from my Spring controller. I pass some fields in the request, and the API returns a response, but that response doesn't contain any of the values I passed to it.
I need to update my database with the response received.
In single-threaded mode it's fine, but how should I save it under concurrent access? Otherwise a response may end up being saved against another user.
The third-party API refuses to send back any of the values I pass in the request.
Because it's an official system, I can't provide any code.
Still, we are using a RestTemplate to call the API, with request parameters such as the user id, a request number, and the file as a byte array. The response we get back contains only the file name and a status such as "doc upload successful". So under concurrent access it can happen that the status and file name from one response get saved against some other user.
Please advise how I can make my code thread-safe, given that the REST API doesn't echo back any of the values in its response.
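For illustration only, here is a minimal sketch of the call pattern described above, with hypothetical names (`UploadRequest`, `UploadResponse`, `UploadRecord`, and the injected `restTemplate` and `uploadRepository` are all assumptions). The point it shows is that as long as the user id and the third-party response live in local variables of the same request-handling method, each response can only be saved against the user of that particular request, because method locals are never shared between the threads serving concurrent requests.

```java
// Sketch only: all type and field names are hypothetical.
@PostMapping("/documents")
public ResponseEntity<Void> upload(@RequestParam String userId,
                                   @RequestParam String requestNumber,
                                   @RequestParam("file") MultipartFile file) throws IOException {

    UploadRequest request = new UploadRequest(userId, requestNumber, file.getBytes());

    // restTemplate is an injected RestTemplate; the URL is illustrative.
    UploadResponse response = restTemplate.postForObject(
            "https://thirdparty.example.com/upload", request, UploadResponse.class);

    // The response only carries fileName and status, but userId is still in scope
    // here, so the record is always persisted for the caller of *this* request.
    uploadRepository.save(new UploadRecord(userId, requestNumber,
            response.getFileName(), response.getStatus()));

    return ResponseEntity.ok().build();
}
```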
Related
I'm trying to write an API that would receive a PDF file, process it, and send the results back to the user within the same request.
I'm confused as to which request method should be used for this task, as the user is trying to GET a response from the server, but they also POST a file.
In this case, should/can I add a PDF file as a parameter to the GET request, or should I use a POST request - but if the latter, how does the user get the processed result?
GET is usually used to get info that is already on the server, while POST is to send information to the server, and the server responds based on that information.
I think your question should be more focused on whether to use POST or PUT. Take a look at this guide, and act according to your case.
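To make the POST option concrete, here is a minimal sketch assuming Spring MVC and hypothetical names (`PdfService` and its `process` method are assumptions): the file arrives in a POST request, and the processed result goes back to the user as the body of that same response.

```java
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

import java.io.IOException;

@RestController
public class PdfController {

    private final PdfService pdfService;            // hypothetical processing service

    public PdfController(PdfService pdfService) {
        this.pdfService = pdfService;
    }

    @PostMapping(value = "/pdf/analysis", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
    public ResponseEntity<String> analyze(@RequestParam("file") MultipartFile file) throws IOException {
        String result = pdfService.process(file.getBytes());
        // The caller "gets" the result simply as the body of the POST response.
        return ResponseEntity.ok(result);
    }
}
```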
How is it possible to handle timeouts for time-consuming operations in a REST API? Let's take the following scenario as an example:
A client service sends a request to insert a resource through a REST API.
Timeout elapses. The client thinks the insertion failed.
The REST API keeps working and finishes the insertion.
The client never learns that the resource was inserted, and its status remains "Failed".
I can think of a solution with a message broker: send the orders to a queue and wait until they are resolved.
Any other workaround?
EDIT 1:
The POST-PUT pattern, as suggested in this thread.
A message broker (adds more complexity to the system).
A callback or webhook: pass a return URL in the request that the server API can call to let the client know the work is completed (see the sketch below).
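A minimal sketch of the callback option, assuming Spring and purely illustrative names (`ResourcePayload`, `insertSlowly`, the `callbackUrl` parameter): the server acknowledges the request immediately with 202 Accepted and calls the client's return URL once the slow insertion has finished.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.client.RestTemplate;

import java.util.concurrent.CompletableFuture;

@RestController
public class ResourceController {

    private final RestTemplate restTemplate = new RestTemplate();

    record ResourcePayload(String name) {}          // hypothetical request body

    @PostMapping("/resources")
    public ResponseEntity<Void> create(@RequestBody ResourcePayload payload,
                                       @RequestParam String callbackUrl) {
        CompletableFuture.runAsync(() -> {
            String newResourceId = insertSlowly(payload);              // the time-consuming insert
            restTemplate.postForLocation(callbackUrl, newResourceId);  // notify the client when done
        });
        return ResponseEntity.accepted().build();                      // 202: received, still working
    }

    private String insertSlowly(ResourcePayload payload) {
        // placeholder for the long-running database work
        return "42";
    }
}
```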
HTTP defines a set of properties for its methods, primarily safety, idempotency, and cacheability. The first guarantees a client that no data is modified; the second promises that a request can be reissued in the face of connection issues, when the client doesn't know whether the initial request succeeded or only the response got lost midway. PUT, for example, provides exactly this idempotency property.
A simple POST request to "insert" some data does not have any of these properties. A server receiving a POST request furthermore processes the payload according to its own semantics. The client does not know beforehand whether a resource will be created or if the server just ignores the request. In case the server created a resource the server will inform the client via the Location HTTP response header pointing to the actual location the client can retrieve information from.
PUT is usually used only to "update" a resource, though according to the spec it can also be used to create a new resource if it does not yet exist. As with POST, on a successful resource creation the PUT response should include such a Location HTTP response header to inform the client that a resource was created.
The POST-PUT creation pattern separates the creation of the URI from the actual persistence of the representation: the client first fires off POST requests to the server until a response is received containing a Location HTTP response header. This header is then used in a PUT request to actually send the payload to the server. As PUT is idempotent, the client can simply reissue that request until it receives a valid response from the server.
On sending the initial POST request to the server, a client can't be sure whether the request reached the server and only the response got lost, or the initial request didn't make it to the server. As the request is only used to create a new URI (without any content yet) the client may simply reissue the request and in worst case just create a new URI that points to nothing. The server may have a cleanup routine that frees unused URIs after a certain amount of time.
Once the client receives the URI, it can simply use PUT to reliably send data to the server. As long as the client hasn't received a valid response, it can just reissue the request over and over until it gets one.
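A client-side sketch of this pattern, assuming Spring's RestTemplate and a purely illustrative payload type (`MyPayload`); a real client would add backoff and a retry limit instead of looping forever:

```java
import org.springframework.web.client.RestTemplate;

import java.net.URI;

public class PostPutClient {

    private final RestTemplate rest = new RestTemplate();

    public void createReliably(String collectionUrl, MyPayload payload) {   // MyPayload is illustrative
        // Step 1: obtain a URI for the new resource. Reissuing this after a
        // timeout at worst creates an extra empty URI the server can clean up later.
        URI location = null;
        while (location == null) {
            try {
                location = rest.postForLocation(collectionUrl, null);
            } catch (Exception timeoutOrConnectionError) {
                // retry (add backoff in real code)
            }
        }

        // Step 2: send the actual representation. PUT is idempotent, so the
        // request can be repeated until a definitive response arrives.
        boolean stored = false;
        while (!stored) {
            try {
                rest.put(location, payload);
                stored = true;
            } catch (Exception timeoutOrConnectionError) {
                // retry (add backoff in real code)
            }
        }
    }
}
```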
I therefore do not see the need to use a message-oriented middleware (MOM) using brokers and queues in order to guarantee reliable messaging.
You could also cache the data after a successful insertion, keyed by a previously exchanged request_id or something of that sort. But I believe a message broker with some asynchronous task runner is a much better way to deal with the problem, especially if your request threads are a scarce resource. What I mean is: if you are receiving a good volume of requests all the time, it is wise to return responses as quickly as possible so that the workers remain available for incoming requests.
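A minimal sketch of the request_id idea, with hypothetical names; in a real system the cache would live in a shared store with an expiry policy rather than an in-memory map:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentInsertService {

    private final Map<String, String> completed = new ConcurrentHashMap<>();

    public String insert(String requestId, String payload) {
        // computeIfAbsent runs the insertion at most once per request_id,
        // even if the client retries after a timeout.
        return completed.computeIfAbsent(requestId, id -> doInsert(payload));
    }

    private String doInsert(String payload) {
        // placeholder for the actual (slow) database insert; returns the new resource id
        return "resource-id-for-" + payload;
    }
}
```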
I'm trying to build a simple Email Verification API. Below you can find the expected client requests in order:
The client gets an email address as input (e.g. mail@example.com).
The client sends a request: GET /emails/?email=mail@example.com
If mail@example.com has not been created before, meaning the previous request returns an empty list as a response, it sends a request: POST /emails/ with mail@example.com in the request body parameters.
The client sends a request: POST /email-verifications/ with email_id in the request body and creates a new email verification object. Upon successful creation, the client receives a token in the response body, and a 6-digit verification code is sent to the corresponding email address.
Now the client gets the verification code as input from the user.
The client sends a request: PATCH /email-verifications/id/ with token and code in the request body.
I'm not exactly sure about the last step, since the corresponding update operation receives two inputs, token and code, that won't themselves be updated on the instance. Rather, they will be compared against the existing instance, and upon success another field, is_verified, will be updated.
Is this the right way to implement such an operation? Or are there better practices I can follow?
PATCH is often not the perfect fit for things, and I think that you probably also shouldn't be using it here.
We ran into a similar issue and wondered how to design it. In our example it wasn't a token and code, but an API for changing a user's password.
Also in our case, a client would send a new password to a server but the server would never return the password.
The most appropriate solution for us ended up being a special password resource, with a url like:
/users/x/password
A GET request on this URL would always yield a 403, and only a PUT request will be supported here. I kinda have the feeling that your design problem should be solved the same way.
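A minimal sketch of such a special resource, assuming Spring MVC and hypothetical names: reads are always forbidden, and only PUT is supported. The verification case could follow the same shape, with a PUT carrying the token and code.

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/users/{id}/password")
public class PasswordController {

    @GetMapping
    public ResponseEntity<Void> get() {
        return ResponseEntity.status(HttpStatus.FORBIDDEN).build();    // 403: never readable
    }

    @PutMapping
    public ResponseEntity<Void> replace(@PathVariable long id,
                                        @RequestBody String newPassword) {
        // hypothetical: hash and store the new password for user `id`
        return ResponseEntity.noContent().build();                     // 204: nothing to return
    }
}
```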
In my system, if I GET an endpoint api/businesses/1, details about a business are returned (Address, Opening Hours, etc.) as JSON. If an access token is passed in the header of the request, then the server can identify the user making the request and can supplement the returned data with user-specific data (Address, Opening Hours, PLUS whether the user has bookmarked this business).
My question is - should authenticated/non-authenticated properties be returned from one request like this, or should they be split into two separate requests? (/api/business/1 for Address and Opening Hours, api/user/123/bookmarks for the user's bookmarked businesses). The latter approach means that I can cache the first request response, which would be useful.
In this case it could be better to split it into two methods: /api/business/1 and /api/user/123/bookmarks/.
Reasons for that:
It makes the API cleaner: each API method does a well-defined job.
It is easier to test your API, because you get rid of the state here (by the state I mean using the token to resolve a user). So by passing the same business/user id you can expect to always get the same result.
Yes, you can cache it (see the sketch below).
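A sketch of what the split could look like, assuming Spring MVC and hypothetical `BusinessService`/`BookmarkService` lookups: the business endpoint is identical for every caller and can be served with a public cache header, while the bookmarks endpoint is per user and stays uncached.

```java
import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.List;
import java.util.concurrent.TimeUnit;

@RestController
public class BusinessController {

    private final BusinessService businessService;    // hypothetical services,
    private final BookmarkService bookmarkService;    // injected via constructor

    public BusinessController(BusinessService businessService, BookmarkService bookmarkService) {
        this.businessService = businessService;
        this.bookmarkService = bookmarkService;
    }

    @GetMapping("/api/businesses/{id}")
    public ResponseEntity<Business> business(@PathVariable long id) {     // Business is illustrative
        return ResponseEntity.ok()
                .cacheControl(CacheControl.maxAge(10, TimeUnit.MINUTES).cachePublic())
                .body(businessService.find(id));                          // hypothetical lookup
    }

    @GetMapping("/api/users/{id}/bookmarks")
    public List<Long> bookmarks(@PathVariable long id) {
        // the user would be identified via the access token in a real system;
        // the response is user-specific, so no shared caching is applied here
        return bookmarkService.bookmarkedBusinessIds(id);                 // hypothetical lookup
    }
}
```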
I have a form that allows the user to send invites to others. The amount of invites is configurable by the user in the user interface, and could theoretically be infinite. The user needs to define an email address per invite.
When clicking 'send', it should ideally post one request to the server, wrapping all records in one bulk submit. Even though this is not truly RESTful (I've heard), it seems preferable to sending possibly 50 separate requests. However, what would be the proper way to do this?
It gets tricky when one of the invites fails, due to a malformed email address or a duplicate invite or the like. It is fine to properly process the other, valid records and provide errors on the invalid ones, but what response status code would one use for this?
Generally I try to use the JSON:API request format. The errors would be in a top-level object called errors and would be an array consisting of multiple objects. The field key within an error object would point to the record index number (as received in the request) and the field name of the error, e.g. "field": "/invites/0/email" for an error on the email field of the first received record.
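For illustration, a sketch of such a bulk endpoint, assuming Spring MVC; the types and the per-record validation are hypothetical, and the choice of status code (a plain 200 with a mixed body versus something like 207 Multi-Status) is exactly the open question here.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.ArrayList;
import java.util.List;

@RestController
public class InviteController {

    record Invite(String email) {}
    record InviteError(String field, String message) {}
    record BulkResult(List<String> sent, List<InviteError> errors) {}

    @PostMapping("/invites")
    public ResponseEntity<BulkResult> send(@RequestBody List<Invite> invites) {
        List<String> sent = new ArrayList<>();
        List<InviteError> errors = new ArrayList<>();

        for (int i = 0; i < invites.size(); i++) {
            Invite invite = invites.get(i);
            if (invite.email() == null || !invite.email().contains("@")) {
                // the "field" value points at the index of the offending record
                errors.add(new InviteError("/invites/" + i + "/email", "invalid email address"));
            } else {
                sent.add(invite.email());       // hypothetical: actually dispatch the invite here
            }
        }

        // One option: 200 with both the successes and the errors in the body.
        return ResponseEntity.ok(new BulkResult(sent, errors));
    }
}
```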
The best solution I've seen to the "batch request" problem is Google Calendar's API. It is a RESTful API, and therefore there is a URL for every resource, which you can manipulate using standard REST semantics (i.e. GET, POST, PUT, DELETE). But the API also exposes a "/batch" endpoint, which accepts a content type of "multipart/mixed"; the request body contains several nested HTTP requests, each with its own headers, method, URL and everything. The response is also one HTTP response with a content type of "multipart/mixed", containing a collection of individual HTTP responses, one response per request.
The advantages of this solution are that:
1. It allows you to design your system in a RESTful manner, which we all know and love.
2. It generalizes well to any combination of HTTP requests that your system can deal with.
For more info see: https://developers.google.com/google-apps/calendar/batch