Curl GET Request Limit to Jetty RESTful API

What is the maximum length of a URL sent in curl to Jetty? Where is it documented, and is it configurable?
I'm implementing a RESTful API on Jetty and will be expecting requests for 1 to 600 accounts. I would like to know the limitations I'm up against. I think you can configure requestHeaderSize on a Jetty server; is there a maximum in Jetty?
Is it better to just use POST instead, even though we're not posting any data to the server for update?

If you are using HTTP GET to submit the data, the limit has nothing to do with curl. Rather, it depends on the server and how much data it is willing to accept. For most servers the default limit is 8K. So if your server accepts that much data in a GET request, curl can handle it as well.
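In Jetty specifically, that limit is controlled by requestHeaderSize on the connector's HttpConfiguration, which also bounds the request line (and therefore the URL). A minimal sketch, assuming Jetty 9+ and an illustrative 64 KB value rather than a recommended one:

    import org.eclipse.jetty.server.HttpConfiguration;
    import org.eclipse.jetty.server.HttpConnectionFactory;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;

    public class LargeUrlServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server();

            // Raise the header limit; this also bounds the request line (and so the URL).
            // 64 KB is an illustrative value, not a recommendation.
            HttpConfiguration config = new HttpConfiguration();
            config.setRequestHeaderSize(64 * 1024);

            ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(config));
            connector.setPort(8080);
            server.addConnector(connector);

            server.start();
            server.join();
        }
    }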

Related

OTRS v4 SOAP/REST response limited to 500 results

I am pulling tickets from an OTRS server (I have set up GenericTicketConnector and I use both the SOAP and REST protocols).
For example, when I try to extract all tickets from a specific queue, the response is always limited to 500 results (there are currently 8000+ tickets in the queue).
Example:
curl "https://localhost/otrs/nph-genericinterface.pl/Webservice/GenericTicketConnectorREST/Ticket?UserLogin=user&Password=pass&Queue=ExampleQueue"
How can I get all tickets from a queue?
It is actually simple, but it is not documented in their API docs, and paginating the requests didn't work either. The way to do it is to pass Limit=(any integer) in the URL.
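For example, extending the request above (the Limit value here is arbitrary; pick anything larger than the number of tickets in the queue):

    curl "https://localhost/otrs/nph-genericinterface.pl/Webservice/GenericTicketConnectorREST/Ticket?UserLogin=user&Password=pass&Queue=ExampleQueue&Limit=10000"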

What is an 'HTTP text POST' in a web service context?

I am confused about HTTP text 'POST' in a web service context. We have a web service built on the SOAP protocol; now the integration partner wants to eliminate the SOAP portion of the XML message and wants us to post the XML message as an 'HTTP text POST'.
Is this REST HTTP POST? Please clarify.
POST is an HTTP request method, of which there are many (e.g. GET, PUT, DELETE, HEAD...). POST is used to submit data to a server for processing, whereas GET (for example) is used to retrieve data for reading. These methods are used for all HTTP communication, whether the target is a SOAP/REST web service or an Apache server hosting a regular website.
SOAP normally operates using POST requests, although it is possible to use GET with SOAP 1.2 as well. GET requests have more restrictive size limitations than POST requests.
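In other words, the partner most likely wants the raw XML sent as the body of a plain HTTP POST (sometimes called POX, "plain old XML") rather than wrapped in a SOAP envelope. A hedged sketch with curl, where the URL, file name and content type are placeholders:

    curl -X POST -H "Content-Type: text/xml" --data-binary @message.xml "https://partner.example.com/endpoint"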

Programmatically POSTing a form is not doing what my browser is doing. Why?

I'm trying to programmatically submit a form on a web site that I do not own. I'm trying to simulate what I would manually do with a web browser. I am issuing an HTTP POST request using an HTTP library.
For a reason that I don't know, I am getting a different result (an error, a different response, ...) when I submit the form programmatically compared to submitting it manually in a web browser.
How can that be and how can I find out what mistake I have made?
This question is intentionally language and library agnostic. I'm asking for the general procedure for debugging such issues.
All instances of this problem are equivalent. Here is how to resolve all of them:
The web site you are posting to cannot tell different clients apart. It cannot find out whether you are using a web browser or an HTTP library. Therefore, only what you send determines how the server reacts.
If you observe different responses from the server this means that you are sending different requests.
A few important things that you probably have to send correctly:
URL
Verb (GET or POST)
Headers: Host, User-Agent, Content-Length
Cookies (the Cookie and Set-Cookie headers)
The request body
Use an HTTP sniffer like Fiddler to capture what you are programmatically sending and what your browser is sending. Compare the requests for differences. Eliminate the differences one by one to see which one caused the problem. You can drag an HTTP request into the Composer Window to be able to modify and reissue it.
It is impossible to still get a different result if you truly have eliminated all differences between the manual and the programmatic requests.
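For reference, here is a minimal Java sketch of building such a request explicitly, so you can see exactly which verb, headers, cookies and body go over the wire (the URL, form fields and cookie value are made up for illustration):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class FormPostSketch {
        public static void main(String[] args) throws Exception {
            // URL, field names and cookie value below are placeholders.
            byte[] body = "username=alice&password=secret".getBytes(StandardCharsets.UTF_8);

            HttpURLConnection conn = (HttpURLConnection) new URL("https://example.com/login").openConnection();
            conn.setRequestMethod("POST");                                   // the verb
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            conn.setRequestProperty("User-Agent", "Mozilla/5.0");            // some sites check this
            conn.setRequestProperty("Cookie", "session=abc123");             // cookies from an earlier response
            conn.setFixedLengthStreamingMode(body.length);                   // sets Content-Length
            conn.setDoOutput(true);

            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);                                             // the request body
            }
            System.out.println(conn.getResponseCode());
        }
    }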

PHP: What is fastest: SOAP, file_get_contents or cURL?

I have website A that sends a request to website B in the header on each page load.
Server B does some internal search in MySQL and needs to return some data to server A, which will display some content based on that response.
What is the fastest way to make communication between these two servers?
The fastest and easiest method is cURL, and the data returned by cURL is also easy to parse.
file_get_contents is primarily meant for reading files (it can fetch URLs when allow_url_fopen is enabled, but with little control), while cURL is dedicated to transferring data between the two servers and gives you control over things like headers and timeouts.

Browser-based REST API authentication

I'm working on a REST web service, and in particular on authentication methods for browser-based requests (using JSONP or cross-domain XHR requests/XDomainRequest).
I've done some research into OAuth and also Amazon's AWS. The big drawback of both is that I need to do one of the following:
Store secret tokens in the browser
Let a server-side script handle the signing. Basically I'd first make a request to a server of mine to get a specific pre-signed JavaScript request, which I'd then use to connect to the real REST server.
What are some other options or suggestions?
Well, the only true answer here is proxying through a server, using sessions/cookies to authenticate, and of course using SSL. Sorry for answering my own question.
Yes, authenticating JSONP calls is tough, because the browser client needs to know the shared secret.
An option would be to make the endpoint anonymous (no authentication necessary). This comes with other security holes (the server is open to attacks; anyone can call it), but you could handle this by only exposing a very limited resource and/or using rate limiting. With rate limiting, only a certain number of calls are allowed from one client within a certain range of time. It works by identifying the client (e.g. by source IP or other client fingerprints).
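A minimal sketch of such rate limiting, keyed by source IP with a fixed one-minute window (the limit of 60 requests per window is an arbitrary assumption):

    import java.util.HashMap;
    import java.util.Map;

    // Fixed-window rate limiter keyed by client IP.
    // 60 requests per minute is an arbitrary example limit.
    public class IpRateLimiter {
        private static final int LIMIT_PER_WINDOW = 60;
        private static final long WINDOW_MILLIS = 60_000;

        private final Map<String, Integer> counts = new HashMap<>();
        private long windowStart = System.currentTimeMillis();

        public synchronized boolean allow(String clientIp) {
            long now = System.currentTimeMillis();
            if (now - windowStart >= WINDOW_MILLIS) {
                counts.clear();           // start a fresh window
                windowStart = now;
            }
            int n = counts.merge(clientIp, 1, Integer::sum);
            return n <= LIMIT_PER_WINDOW; // reject calls beyond the limit
        }
    }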
I once experimented with one-time tokens, but they all somewhat failed because you have the problem of obtaining the token itself and of protecting it against repeated retrieval by bots (which again comes back to the need for rate limiting).
I haven't tried this myself, but you can try the following (I am pretty sure I will get some feedback):
On the server side, generate a timestamp. Using HMAC-SHA256, generate a key for that timestamp using a password, and send the generated key and the timestamp in the HTML.
When you make the AJAX call to the web service (assuming it is a different server), send the key and the timestamp along with the request. Check that the timestamp is within 5-15 minutes.
If it is, compute the HMAC-SHA256 with the same password and check whether the generated key is the same as the one sent.
Also, on the client side you will have to check that your timestamp is still valid before making the call.
You can generate the key using the following URL:
http://buchananweb.co.uk/security01.aspx
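A minimal Java sketch of the timestamp-plus-HMAC idea described above (the secret value, the 15-minute window and the Base64 encoding are assumptions for illustration, not part of the original answer):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class TimestampToken {
        // Shared secret known only to the servers involved (made-up value).
        private static final byte[] SECRET = "change-me".getBytes(StandardCharsets.UTF_8);
        private static final long MAX_AGE_MILLIS = 15 * 60 * 1000; // assumed 15-minute validity

        // HMAC-SHA256 over the timestamp, Base64-encoded for transport in HTML/requests.
        public static String sign(long timestamp) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            byte[] sig = mac.doFinal(Long.toString(timestamp).getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(sig);
        }

        // Server-side check: the timestamp must be recent and the signature must match.
        public static boolean verify(long timestamp, String token) throws Exception {
            boolean fresh = System.currentTimeMillis() - timestamp <= MAX_AGE_MILLIS;
            return fresh && sign(timestamp).equals(token);
        }

        public static void main(String[] args) throws Exception {
            long ts = System.currentTimeMillis();
            String token = sign(ts);
            System.out.println(verify(ts, token)); // true while still within the window
        }
    }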