I have a service that connects to an enterprise service over which I have no control. The enterprise service requires that I make a call to initiate a ping to a device and then make subsequent calls to get the status. After 20 or so seconds I will get the status back.
I have been thinking about a REST pattern and keep getting stuck on the fact that it is not truly RESTful, so I'd like to reach out for feedback and get some opinions. I could just do a normal GET /device/status and hit it over and over again? Or I could break up the call into /device/ping and /device/status or something like that. Any ideas are appreciated!
Thanks
For status, REST conventions would suggest a format of '/device/{deviceId}/status'. But if the enterprise service you are connecting to does not support multiple devices, you could go for '/device/status' with the 'GET' HTTP verb.
You could use '/device/status' with the 'HEAD' HTTP verb as an exposure for the ping call.
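A minimal sketch of what those two exposures could look like, assuming a Python/Flask layer in front of the enterprise service (the in-memory status dictionary and the 202 response are only illustrative; the real calls to the enterprise service would go where the comments indicate):

from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the status last reported by the enterprise service.
_status = {}

@app.route("/device/<device_id>/status", methods=["GET", "HEAD"])
def device_status(device_id):
    if request.method == "HEAD":
        # HEAD acts as the "initiate a ping" exposure; the real call to the
        # enterprise service to start pinging the device would go here.
        _status[device_id] = "pending"
        return "", 202
    # GET simply returns whatever status was last reported.
    return jsonify({"deviceId": device_id,
                    "status": _status.get(device_id, "unknown")})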
I would also suggest returning a JMS queue URL in the Location header of the API response. Usually, in device management applications, a separate JMS server is deployed. Make use of it if it's there.
Take a hint from here.
Assume I have an API that gives a JSON response returning an id and a name.
In a mobile application I would normally make an HTTP GET request to fetch this data in a one-time connection with the server and display the results in the app. However, if the data changes over time and I want to keep listening for those changes, how is that possible?
I have read about sockets and seen the socket_io_client and socket_io packages, but I have not got my head around them yet. Is using sockets the only way to achieve this scenario, or is it possible to do it with a normal HTTP request?
Thanks for your time
What you need is not an API but a Webhook:
An API can be used from an app to communicate with myapi.com. Through that communication, the API can List, Create, Edit or Delete items. The API needs to be given instructions, though.
Webhooks, on the other hand, are automated calls from myapi.com to an app. Those calls are triggered when a specific event happens on myapi.com. For example, if a new user signs up on myapi.com, the automated call may be configured to ask the app to add a new item to a list.
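To make the distinction concrete, here is a minimal sketch of a webhook receiver, assuming Python/Flask; the route name and payload fields are only illustrative of what myapi.com might send when the event fires:

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/item-changed", methods=["POST"])
def item_changed():
    # myapi.com calls this endpoint whenever the watched data changes,
    # so the app reacts to pushes instead of polling.
    payload = request.get_json()
    print("item updated:", payload.get("id"), payload.get("name"))
    return "", 204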
Is using sockets the only way to achieve this scenario, or is it possible to do it with a normal HTTP request?
Sockets are only one of the ways to achieve your goal. It is possible to do it using a normal HTTP request. Here, for example, the docs explain how to update data over the internet using HTTP.
From the flutter docs:
In addition to normal HTTP requests, you can connect to servers using WebSockets. WebSockets allow for two-way communication with a server without polling.
You'll find what you need under the networking section.
You should also take a look at the Stream and StreamBuilder classes.
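Whatever the client framework, the WebSocket idea is the same: keep one connection open and react when the server pushes data. A minimal sketch in Python (using the third-party websockets package; the server URL and message fields are placeholders):

import asyncio
import json

import websockets  # pip install websockets

async def listen(url):
    # One long-lived connection; no repeated polling requests.
    async with websockets.connect(url) as ws:
        async for message in ws:
            item = json.loads(message)
            print("update:", item["id"], item["name"])

asyncio.run(listen("wss://example.com/items"))  # placeholder endpoint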
I'm looking for guidance on good practices when it comes to returning errors from a REST API. I'm working on a new API, so I can take it in any direction.
In my case the client invokes my API, which internally invokes some external APIs. In case of success there is no problem, but in case of error responses from the far end (the external cloud APIs) I am not sure what the industry standard is for such services. I am currently thinking of returning 200 OK and then a JSON payload detailing the external API errors.
So what are the industry recommendations? Good practices (please explain why!), and also, from a client point of view, what kind of error handling in the REST API makes life easier for the client code?
The failure you're asking about is one that has occurred within the internals of the service itself, even though it stems from external dependencies, so the 5XX status code range is the correct choice. 503 Service Unavailable looks perfect for the situation you've described.
5XX codes are used to tell the client that even though the request was fine, the server has had some kind of problem fulfilling it. On the other hand,
4XX codes are used to tell the client that it has done something wrong in the request (and that the server is just fine, thanks).
Sections 10.4 and 10.5 of the HTTP 1.1 spec explain the different purposes of 4XX and 5XX codes.
Our colleagues have already provided the links and explanations about the HTTP status codes, so you should learn them and find the most appropriate one for your case.
I'll concentrate more on what can influence your decisions, assuming you've learnt the status codes.
Basically, you should understand the business implications of the flow triggered when the client calls your API. The client doesn't know anything about the external cloud API you're working with and doesn't really care whether it works or not; the client works with your application.
If so, when the remote system returns some kind of error (and yes, different error statuses should give you a clue of what's wrong with the remote system), it's your business decision how to handle this error, and depending on this decision you might want to "behave" differently in the interaction with the client.
Here are some examples:
You know that the remote system breaks extremely rarely, but once it's unavailable, your system doesn't work either.
In this case you might consider retrying the call to the remote system if it failed. And if you're still out of luck, return some error status, probably something like 5XX.
You know that the data provided by the remote system is not really critical; on the other hand, when the client calls your API it's better to provide "something", even if it's not really up to date, than nothing. Think about a remote system that provides "recommended movies" by some client id, and you're building a portal (Netflix style). If this recommended-movies service is down for some reason, it doesn't make sense to fail the whole portal page (think about the awful user experience). In this case you might want to "pre-cache" some generic list of movies and use it as a fallback in case of failure of that remote service, and obviously return a 2XX status in any case (see the sketch after these examples).
A more advanced architecture. You know that the remote service fails often, but you can continue to work when it's down. In this case you may want to choose an "asynchronous" style of interaction with the client. For example: the client calls your REST API and you respond immediately with an "Accepted" status code (202) and a ticket id. You can save this id with its status in some database so that when the user asks for the status of the ticket by ticket id, you'll be able to query the DB. The point is that you return immediately. Then you send a message with the task to some messaging system, and once the consumer picks the message up, it gets processed and the DB is updated. As long as the remote service keeps failing, the message goes back to the queue still marked "unprocessed" (usually messaging systems can implement this behavior). At some point the remote system starts responding, all the messages get processed, and their status in the DB becomes "done".
So it's up to the client to ask "what happened", or you can implement some push model with web sockets or something (it's not REST-style communication anymore in that case). But the point is that at some point in time the client will receive "OK, we're done with the ticket ID" (status 200). The client can then call a special endpoint and consume the stored results that you keep in the DB as well (again status 200).
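As a rough illustration of the first two examples (retry the flaky remote system, then fall back to pre-cached data instead of failing), here is a sketch in Python using the requests library; the remote URL, retry count and cached list are made up for the example:

import requests

# Pre-cached generic recommendations used as a fallback.
FALLBACK_MOVIES = [{"id": 0, "title": "Generic popular movie"}]

def recommended_movies(client_id, retries=3):
    # Retry the flaky remote service a few times...
    for _ in range(retries):
        try:
            resp = requests.get(
                f"https://recommendations.example.com/clients/{client_id}",
                timeout=2,
            )
            if resp.ok:
                return resp.json()  # fresh recommendations
        except requests.RequestException:
            pass
    # ...and fall back to the generic list rather than failing the whole
    # page, so the caller can still return a 2XX with "something".
    return FALLBACK_MOVIES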
Bottom line: as you see, HTTP return codes are just an indicator, but it's up to you how to organize the process of interaction with the client, and the relevant HTTP statuses will be derived from your decisions.
I would use 503 (Service Unavailable) as the error. Reason:
This considers the case where the API operation cannot be completed without a response from the external API. This is similar to my DB not responding: my API is unavailable for service until the external service is back online.
As an API client, I am not concerned with whether the API server internally invokes other APIs or not; I am just concerned with the result from the API server. So it does not matter to the client whether I am a proxy or not. Hence, I would avoid 502 (Bad Gateway) and 504 (Gateway Timeout); these errors can give the client the wrong impression that a gateway between the client and our service is causing trouble.
As suggested by @developerjack, I would also recommend: "Include a Retry-After header so that your HTTP client knows not to spam you with retries until after X time. This means less error traffic for you, and better request planning for the client."
HTTP calls are between client and server, and so the error codes should reflect where the error or fault lies on either side of that relationship. Just because it's downstream to you doesn't mean the HTTP client needs to care about that.
Given this, you should be returning a 5xx error because the fault is not with the client, it's with the server (or its downstream services). Returning a 2xx (see below for the caveat) would be incorrect because the HTTP call did not succeed, and a 4xx would be incorrect because it's not the client's fault.
Digging into specific 5xx's you can return:
A 504 or 502 might be appropriate if you specifically want to signal that your service is acting as a gateway/proxy.
A 523 is unofficial but used by Cloudflare to specifically signal that an upstream/origin service is unreachable.
A 500 (with a human and machine readable error body) is a safe default that simply indicates "there is something not right with the server and its services right now".
Now, in terms of best practice, there are some techniques you can use to either reduce the 500 errors, or make it easier on the clients to respond/react to this 5xx response.
Put in place retries within your service. If your service is working and the fault is downstream, and you can successfully store the client's request to retry later when downstream services are available, then you can still respond with a 2xx and the client knows that their request will be submitted. A great example of this might be a user sign-up workflow; you can process the signup at your side, and then queue the welcome email to retry later if your email provider is unavailable.
Have human descriptions, machine error codes and links in your API responses. Human descriptions are useful when debugging and developing against your service. Machine codes mean clients can index/track and code up specific code paths for a given scenario, and links to your docs mean you can always provide more information. Even better is including specific IDs for you to trace instances of the error in case the HTTP client needs to reach out for support (though this will be heavily dependent on your logging & telemetry). Here's an example:
{
  "error_code": 1234,
  "description": "X happened with Y because of Z.",
  "learn_more": "https://dev.my.app/errors/1234",
  "id": "90daa63b-f5ac-4c33-97d5-0801ade75a5d"
}
Include a Retry-After header so that your HTTP client knows not to spam you with retries until after X time. This means less error traffic for you, and better request planning for the client.
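A minimal sketch of what such a response could look like from a Python/Flask service; the route, the error values (reused from the example above) and the 120-second hint are only illustrative:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def create_order():
    # Pretend the downstream dependency is currently unreachable.
    body = {
        "error_code": 1234,
        "description": "X happened with Y because of Z.",
        "learn_more": "https://dev.my.app/errors/1234",
        "id": "90daa63b-f5ac-4c33-97d5-0801ade75a5d",
    }
    resp = jsonify(body)
    resp.status_code = 503
    # Tell the client not to retry for another 120 seconds.
    resp.headers["Retry-After"] = "120"
    return resp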
I want to use a RESTful API of a web service that I have. However, I really don't know how the web service knows how to "give it" to the stand-alone application, since the application does not have a URL. Is there a mechanism that makes URLs unnecessary in this case?
I think you need to read up a little more on what REST is and does. By its nature REST is a mechanism for requesting data, i.e. it is a "pull", not a "push". REST is typically used over HTTP, hence the need for a URL, in the same way you request/pull data every time you visit a web page.
If you wish to notify from one system to another as soon as a change happens, then you need to look at something other than REST. Alternatively, your client can poll the REST service continually to check its response.
I have a BizTalk application where I have exposed a schema as a RESTful web service, which calls another REST service. I am able to successfully handle GET and DELETE requests.
Is there a way to handle a POST request without writing a pipeline component to serialize the POST request to a schema?
Also, the application may have to handle several POST calls, so will it be possible to serve these from one single receive location and then filter the request on the send port?
Please let me know if any more details are required.
So, here's the thing. You're mixing together some things that technically have nothing to do with each other.
For instance, a Plain Old XML (POX) service, usually a POST, does not 'expose' a schema in the way a SOAP service does. It just takes whatever content is POSTed to it.
Following that, serialization/deserialization is also a concept more related to SOAP than to POX or REST.
So...
Yes, but what exactly are you doing?
Yes. A plain http endpoint can accept any content type. Once it's over the wire, all the normal BizTalk processing rules apply.
I want my API to be RESTful.
Let's say I have started a long-running task with POST and now want to be informed about its progress.
What is the idiomatic REST way to do this?
Poll with GET every 10 seconds?
The response to your
POST /new/long/running/task
should include a Location header. That header will point to an endpoint the client can hit to find out the status of the task. I would suggest your response look something like:
Location: http://my.server/task-status/15
{
  "self": "/task-status/15",
  "status": "running",
  "expectedFinishedAt": <timestamp>
}
Then your client doesn't have to ping arbitrarily, because the server is giving it a hint as to when to check back. Later GETs to /task-status/15 will return updated timestamps. This saves you from having to blindly poll the server. Of course, this works much better if the server has some idea as to how long it will take to finish processing the task.
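On the client side, a minimal polling sketch against the endpoints above (Python with the requests library; the host and the fixed sleep interval are placeholders):

import time

import requests

BASE = "http://my.server"  # placeholder host from the example above

# Kick off the long-running task; the server answers with a Location header.
resp = requests.post(f"{BASE}/new/long/running/task")
status_url = resp.headers["Location"]  # e.g. http://my.server/task-status/15

# Poll the status resource the server pointed us at.
while True:
    status = requests.get(status_url).json()
    if status["status"] != "running":
        break
    # A fixed interval is used here; a smarter client could wait until the
    # server's "expectedFinishedAt" hint before checking again.
    time.sleep(5)

print("task finished:", status)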
The way REST works, or rather the mechanism it uses (HTTP GET/POST/PUT/DELETE etc.), doesn't provide an event-driven mechanism where the server could push data to the client. It is theoretically possible to have client/server functionality in both your server and your client, though I wouldn't personally endorse that design. So having some sort of submit API (POST/PUT) and then a status query mechanism (GET) would do the job.
The client should be the one giving you that information, showing you how many bytes have been sent already to the server. The server should not care about a partially uploaded resource.
That put aside, you will return a "Location" header indicating where the resource is once it is created, but not earlier. I mean, when you POST you don't know what the address of the resource is going to be (that is indicated later in the Location header), so there is no reasonable way of providing a URL to check the status of the upload, because there is no reasonable way of identifying it until it is done (you may try crazy things, but they are not recommendable).
Again, the client should give you that feedback, not the server.
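As a rough sketch of that split of responsibilities (Python with the requests library; the file name and URL are placeholders), the client counts the bytes it streams itself and only reads the Location header once the resource has actually been created:

import os

import requests

def upload_with_progress(path, url):
    total = os.path.getsize(path)
    sent = 0

    def chunks():
        # The client tracks its own progress while streaming the file.
        nonlocal sent
        with open(path, "rb") as f:
            while True:
                chunk = f.read(64 * 1024)
                if not chunk:
                    break
                sent += len(chunk)
                print(f"sent {sent} of {total} bytes")
                yield chunk

    resp = requests.post(url, data=chunks())
    # Only after creation does the server say where the resource lives.
    return resp.headers.get("Location")

location = upload_with_progress("video.mp4", "https://example.com/uploads")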