OTRS v4 SOAP/REST response limited to 500 results

I am pulling tickets from an OTRS server (I set up the GenericTicketConnector and use both the SOAP and REST protocols).
For example, when I try to extract all tickets from a specific queue, the response is always limited to 500 results (there are currently 8000+ tickets in the queue).
Example:
curl "https://localhost/otrs/nph-genericinterface.pl/Webservice/GenericTicketConnectorREST/Ticket?UserLogin=user&Password=pass&Queue=ExampleQueue"
How can I get all tickets from a queue?

It is actually simple, but it is not documented in their API docs, and paginating the requests didn't work either. The way to do it is to pass Limit=(any integer) in the URL.
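For example, reusing the request from the question with a Limit value larger than the queue size (10000 here is just an arbitrary number):

curl "https://localhost/otrs/nph-genericinterface.pl/Webservice/GenericTicketConnectorREST/Ticket?UserLogin=user&Password=pass&Queue=ExampleQueue&Limit=10000"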

Handle REST API timeout in time consuming operations

How can one handle timeouts in time-consuming operations in a REST API? Let's take the following scenario as an example:
A client service sends a request to insert a resource through a REST API.
The timeout elapses. The client thinks the insertion failed.
The REST API keeps working and finishes the insertion.
The client is never notified about the resource insertion, so on its side the status is "Failed".
I can think of a solution with a message broker: send the orders to a queue and wait until they are processed.
Any other workaround?
EDIT 1:
The POST-PUT pattern, as suggested in this thread.
A message broker (adds more complexity to the system).
A callback or webhook: pass a return URL in the request that the server API can call to let the client know that the work is completed.
HTTP defines a set of properties for its methods, primarily safety, idempotency and cacheability. While the first guarantees a client that no data is modified, the second promises that a request can be reissued in the face of connection issues, i.e. when the client does not know whether the initial request succeeded or whether only the response got lost midway. PUT, for example, provides idempotency.
A simple POST request to "insert" some data does not have any of these properties. A server receiving a POST request processes the payload according to its own semantics; the client does not know beforehand whether a resource will be created or whether the server will just ignore the request. If the server did create a resource, it informs the client via the Location HTTP response header, which points to the actual location the client can retrieve information from.
PUT is usually used only to "update" a resource, though according to the spec it can also be used to create a new resource if it does not yet exist. As with POST, on a successful resource creation the PUT response should include a Location HTTP response header to inform the client that a resource was created.
The POST-PUT creation pattern separates the creation of the URI from the actual persistence of the representation: the client first fires off POST requests to the server until it receives a response containing a Location HTTP response header. That header is then used in a PUT request to actually send the payload to the server. As PUT is idempotent, the client can simply reissue the request until it receives a valid response from the server.
On sending the initial POST request, a client can't be sure whether the request reached the server and only the response got lost, or whether the initial request never made it to the server. As the request is only used to create a new URI (without any content yet), the client may simply reissue it and in the worst case just create a new URI that points to nothing. The server may have a cleanup routine that frees unused URIs after a certain amount of time.
Once the client receives the URI, it can simply use PUT to reliably send data to the server. As long as the client doesn't receive a valid response, it can reissue the request over and over until it does.
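A minimal sketch of that flow with curl might look like this (the endpoint https://api.example.com/orders and the file order.json are made-up placeholders):

# Step 1: POST until a response with a Location header arrives (this only creates the URI).
location=""
while [ -z "$location" ]; do
  location=$(curl -s -D - -o /dev/null -X POST https://api.example.com/orders \
    | tr -d '\r' | awk 'tolower($1) == "location:" {print $2}')
done
# Step 2: PUT the actual payload; since PUT is idempotent, retry until a valid response arrives.
until curl -sf -X PUT -H "Content-Type: application/json" --data @order.json "$location"; do
  sleep 2
done
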
I therefore do not see the need to use a message-oriented middleware (MOM) using brokers and queues in order to guarantee reliable messaging.
You could also cache the data after a successful insertion under a previously exchanged request_id or something of that sort. But I believe a message broker with some asynchronous task runner is a much better way to deal with the problem, especially if your request threads are a scarce resource. What I mean by that is: if you are receiving a good number of requests all the time, it is a good idea to return responses as quickly as possible so that the workers are available for incoming requests.

Is it a good idea to send request body JSON values with an HTTP GET request?

I am working with a third-party service provider from which I have to fetch / filter some data. The search filter parameters are complex in nature and contain a lot of filter params. I have tried using query string values, but with a query string I find it difficult to send the data, since it may contain an array of objects.
With a JSON request body, even on an HTTP GET request, I find it extremely easy to process the request, and I tested it with ease using the Insomnia REST client. However, the Postman REST client doesn't allow sending body parameters with a GET request.
I have seen others using a POST request to fetch / filter data from the API for the same purpose. An HTTP POST request can be used to fetch data, but is it good from a technical standpoint? Is it recommended practice to send JSON request body values with a GET request?
Not sure how much control you have over the protocol or whether you have any middleware, but an HTTP GET usually doesn't have a body; I've even seen smart firewalls and hosting services strip any body by default. If you want to stay "close" to clean REST, you might consider adding a "/query" to your resource path and doing a POST to that endpoint; it's a bit "RPC-ish" but not too bad. Another option would be a completely independent query service that uses another protocol such as JSON-RPC.
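For illustration, such a call might look roughly like this (the endpoint and filter payload are hypothetical):

# Hypothetical /query endpoint: complex filters go in the POST body instead of the query string.
curl -X POST "https://api.example.com/customers/query" \
  -H "Content-Type: application/json" \
  -d '{"filters": [{"field": "country", "in": ["DE", "NL"]}, {"field": "age", "gte": 21}]}'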

What is an "HTTP text POST" in a web service context?

I am confused about "HTTP text POST" in a web service context. We have a web service built on the SOAP protocol; now the integration partner wants to eliminate the SOAP portion of the XML message and wants us to post the XML message as an "HTTP text POST".
Is this a REST HTTP POST? Please clarify.
POST is an HTTP request method, of which there are many (ex. GET, PUT, DELETE, HEAD...). POST is used to submit data to a server for processing, whereas GET (for example) is used to retrieve data for reading. You can read more here. These methods are used for all HTTP communication, whether the target is a SOAP/REST web service or an Apache server hosting a regular website.
SOAP normally operates using POST requests, although it is possible to use GET with SOAP 1.2 as well. GET requests have more restrictive size limitations than POST requests.
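In practice, what the partner describes usually amounts to posting the bare XML document as the request body, without a SOAP envelope. A sketch of such a request with curl (the endpoint and file name are made up):

# The body is just the plain XML document, no SOAP envelope around it.
curl -X POST "https://partner.example.com/orders" \
  -H "Content-Type: text/xml" \
  --data-binary @order.xml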

RESTful: creating multiple records in one request

I have a form that allows the user to send invites to others. The number of invites is configurable by the user in the user interface, and could theoretically be infinite. The user needs to provide an email address per invite.
When clicking 'send', it should ideally post one request to the server, wrapping all records in one bulk submit. Even though this is not truly RESTful (or so I've heard), it seems preferable to sending possibly 50 separate requests. However, what would be the proper way to do this?
It gets tricky when one of the invites fails, due to a malformed email address, a duplicate invite, or the like. It is fine to properly process the other, valid records and to return errors for the invalid ones, but what response status code would one use for this?
Generally I try to use the JSON:API request format. The errors would live in a top-level object called errors, which is an array of error objects. The field key within an error object would point to the record's index number (as received in the request) and the field name of the error, e.g. "field": "/invites/0/email" for an error on the email field of the first received record.
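An error response in that style could look roughly like this (status codes and messages are purely illustrative):

{
  "errors": [
    { "status": "422", "field": "/invites/0/email", "detail": "Email address is malformed." },
    { "status": "409", "field": "/invites/3/email", "detail": "This address has already been invited." }
  ]
}
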
The best solution I've seen to the "batch request" problem is Google Calendar's API. It is a RESTful API, so there is a URL for every resource which you can manipulate using standard REST semantics (i.e. GET, POST, PUT, DELETE). But the API also exposes a "/batch" endpoint, which accepts a content type of "multipart/mixed"; the request body contains several nested HTTP requests, each with their own headers, method, URL and everything. The response is also a single HTTP response with a content type of "multipart/mixed", containing a collection of individual HTTP responses, one per request.
The advantages of this solution are:
1. It allows you to design your system in a RESTful manner, which we all know and love.
2. It generalizes well to any combination of HTTP requests that your system can deal with.
For more info see: https://developers.google.com/google-apps/calendar/batch
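From memory, such a batch request looks roughly like the sketch below; the boundary, paths and payloads are only illustrative, so check the documentation linked above for the exact format:

POST /batch/calendar/v3 HTTP/1.1
Content-Type: multipart/mixed; boundary=batch_boundary

--batch_boundary
Content-Type: application/http

POST /calendar/v3/calendars/primary/events HTTP/1.1
Content-Type: application/json

{"summary": "Kick-off meeting"}

--batch_boundary
Content-Type: application/http

GET /calendar/v3/calendars/primary/events/abc123 HTTP/1.1

--batch_boundary--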

Curl Get Request Limit to Jetty RESTful API

What is the maximum length of a URL sent in curl to Jetty? Where is it documented, and is it configurable?
I'm implementing a RESTful API on Jetty and will be expecting requests for 1 to 600 accounts. I would like to know the limitations I'm up against. I think you can configure requestHeaderSize on a Jetty Server; is there a maximum in Jetty?
Is it better to just use POST instead, even though we're not posting any data to the server for update?
If you are using HTTP GET for the data submission, then the limit has nothing to do with curl. Rather, it depends on the server and how much data it expects. For most servers the default limit is 8K. So if your server supports that amount of data through GET, then curl can handle it as well.
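If the URLs do get too long, the workaround you mention yourself works: switch to POST and move the identifiers into the request body, so the URL length no longer matters. A hypothetical sketch (endpoint and payload are made up):

# The account IDs move from the query string into the body,
# so the URL stays short no matter how many accounts are requested.
curl -X POST "https://api.example.com/accounts/search" \
  -H "Content-Type: application/json" \
  -d '{"accountIds": [1001, 1002, 1003]}'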