I am getting a 400 Bad request ("Your browser sent an invalid request.") response for the request. The request size is 28 KB.
< HTTP/1.0 400 Bad request
< Cache-Control: no-cache
< Connection: close
< Content-Type: text/html
<
<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>
I have the following configuration in my haproxy.conf:
maxconn 100000
tune.bufsize 32768
tune.maxrewrite 1024
What are the right settings to resolve the 400 Bad request error?
Based on this link: https://www.geekersdigest.com/max-http-request-header-size-server-comparison/
it looks like the request headers are too big. They should be smaller than 16 KB, which is the default header size limit.
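If the headers genuinely exceed the buffer, the limit can be raised in HAProxy's global section. A sketch (the values below are illustrative, not a recommendation; headers must fit in tune.bufsize minus tune.maxrewrite):

```
global
    # total per-request buffer; the full header block must fit in
    # tune.bufsize - tune.maxrewrite
    tune.bufsize    65536
    tune.maxrewrite 1024
```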
From my laptop I initiated a POST request to my web server. The HTTP POST request looks something like this (when seen via the Postman console):
POST /api/fwupgrade HTTP/1.1
User-Agent: PostmanRuntime/7.24.1
Accept: */*
Cache-Control: no-cache
Postman-Token: 2b1e72fa-f43b-4fc9-9058-e78533c30f0f
Host: 192.168.71.24
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Content-Type: multipart/form-data; boundary=--------------------------572971355726244237076370
Content-Length: 222
----------------------------572971355726244237076370
Content-Disposition: form-data; name="FileName"; filename="help.txt"
<help.txt>
The Content-Length is indicated as 222. The file help.txt contains only the following characters (for the test I put ten a's):
aaaaaaaaaa
When I receive an HTTP request on the server, I parse the request and see the Content-Length as 222. Now my questions:
a) I assume this Content-Length of 222 counts the bytes after the line "Content-Length: 222", am I right? So this would mean the request body starts from:
----------------------------572971355726244237076370
Content-Disposition: form-data; name="FileName"; filename="help.txt"
<help.txt>
Is this understanding correct?
b) Does the request body always follow the same format, i.e. does it begin after "Content-Length:" and end with the data of the file, in my case "help.txt"?
c) Assuming (a) is correct, I calculate the actual data to start at the location after filename="help.txt"\r\n, and then store it in a file on my server. However, I get 58 surplus bytes after the aaaaaaaaaa. Any idea how I am supposed to interpret Content-Length, or how Postman calculates the Content-Length field?
Regards
a) Roughly yes.
b) It depends on the Content-Type (here: multipart/form-data)
c) You'll need a parser for multipart/form-data messages. See, for instance, https://greenbytes.de/tech/webdav/rfc7578.html
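To illustrate (c): in multipart/form-data the file data is followed by a CRLF and a closing boundary line, and Content-Length covers that whole body, not just the file bytes. A sketch in Python, reconstructing the body with the boundary from the question (the exact part headers Postman emits may differ, so the printed total is illustrative):

```python
# Boundary from the question's Content-Type header: 26 dashes + 24 digits.
boundary = "-" * 26 + "572971355726244237076370"

file_data = b"aaaaaaaaaa"  # the ten 'a' characters in help.txt

body = (
    b"--" + boundary.encode() + b"\r\n"
    + b'Content-Disposition: form-data; name="FileName"; filename="help.txt"\r\n'
    + b"Content-Type: text/plain\r\n"  # Postman may add this; illustrative
    + b"\r\n"                          # blank line between part headers and data
    + file_data
    + b"\r\n--" + boundary.encode() + b"--\r\n"  # closing boundary line
)

# Content-Length is the length of this whole body.
print(len(body))

# The 58 "surplus" bytes after the file data are exactly the trailing
# CRLF plus the closing boundary line:
trailer = b"\r\n--" + boundary.encode() + b"--\r\n"
print(len(trailer))  # 58
```

So the 58 extra bytes are not garbage; they are the CRLF that terminates the part's data plus the final `--boundary--` line, and a multipart parser must strip them.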
In the S3 REST API, how does the PUT operation (i.e. a direct upload, not the multipart upload) send requests for such large files, i.e. gigabytes, through HTTP? Is the direct upload also chunked (like the multipart upload), with a defined size internally?
When I tried doing a PUT (direct upload) operation using the S3 REST API, the maximum I could upload was around 5 GB, which is what Amazon says their limit for direct upload is. But when I tried uploading a file larger than that limit, it threw an exception, "Your proposed upload exceeds the maximum allowed size", and the HTTP response returned had the header 'Transfer-Encoding: chunked'.
Here's a randomly-selected error response from S3.
< HTTP/1.1 412 Precondition Failed
< x-amz-request-id: 207CAFB3CEXAMPLE
< x-amz-id-2: EXAMPLE/DCHbRTTnpavsMQIg/KRRnoEXAMPLEBJQrqR1TuaRy0SHEXAMPLE5otPHRZw4EXAMPLE=
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Fri, 23 Jun 2017 19:51:52 GMT
< Server: AmazonS3
<
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>...
The Transfer-Encoding: chunked response header only indicates that the error response body S3 is sending back to you will use chunked transfer encoding.
This is unrelated to what is permitted for uploads, and the presence of Transfer-Encoding: chunked in either direction (request or response) of an HTTP transaction is independent of whether it is present or supported in the opposite direction.
The PUT object REST API call does not support Transfer-Encoding: chunked on the request. It requires Content-Length: in the request headers, which precludes using chunked transfer encoding.
There is no chunking, blocking, or similar mechanism involved at the HTTP layer in standard uploads -- there is no meaningful internal "part size", because there are no parts: the body is a continuous TCP stream of exactly Content-Length un-encoded octets, with retries and network errors handled by TCP, and HTTP unaware of these mechanisms.
If the Content-Length header you send exceeds the maximum allowed upload, you get the error about your proposed upload exceeding the maximum allowed size. If the connection is accidentally or intentionally severed before Content-Length number of octets are received by S3, the uploaded data is discarded, because partial objects are never created.
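To make the size relationship concrete, here is a minimal sketch (the 5 GiB figure is AWS's documented single-PUT limit; the helper name and the check itself are mine, approximating the server-side validation):

```python
import os

SINGLE_PUT_LIMIT = 5 * 1024**3  # 5 GiB: AWS's documented limit for one PUT

def content_length_for_upload(path: str) -> int:
    """Return the Content-Length a single PUT would declare, or raise if
    S3 would reject the upload as exceeding the maximum allowed size."""
    size = os.path.getsize(path)
    if size > SINGLE_PUT_LIMIT:
        raise ValueError("Your proposed upload exceeds the maximum allowed size")
    # Sent verbatim as Content-Length; the request body is exactly this
    # many un-encoded octets on one TCP stream.
    return size
```

For larger objects the multipart upload API is required; each part is itself a plain PUT with its own Content-Length.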
I am getting
< HTTP/1.1 400 Bad Request
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 96
Illegal request-target: Invalid input '\', expected pchar, '/', '?' or 'EOI' (line 1, column 17)
for URLs containing \, which is OK; I'd like to keep returning 400, but I want to change the message so it's more user-friendly.
It seems to happen before the request reaches any of my controllers.
P.S. I know there is akka.http.parsing.uri-parsing-mode = relaxed, but I don't want to use it (a different message is what I want :).
Update:
sample URLs causing Illegal request-target are:
http://host.host/hello\world
http://host.host/hello{all
http://host.host/hello"all
and so on
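The characters in those URLs are simply not legal in a request path: RFC 3986 restricts path segments to pchar (unreserved / pct-encoded / sub-delims / ":" / "@"), which is what the parser's error message refers to. A rough Python sketch of that check (the grammar here is simplified; percent-encoding is not validated):

```python
import string

# RFC 3986: pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
UNRESERVED = set(string.ascii_letters + string.digits + "-._~")
SUB_DELIMS = set("!$&'()*+,;=")
PCHAR = UNRESERVED | SUB_DELIMS | set(":@")

def invalid_path_chars(path: str) -> set:
    """Characters in `path` that are not pchar, '/', or '%' (simplified:
    '%' is accepted without checking the two hex digits after it)."""
    return {c for c in path if c not in PCHAR and c not in "/%"}

print(invalid_path_chars("/hello\\world"))  # {'\\'}
print(invalid_path_chars("/hello{all"))     # {'{'}
print(invalid_path_chars('/hello"all'))     # {'"'}
```

So \, { and " all fall outside pchar, which is why the strict parser rejects the request line before any route or controller is consulted.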
Every time I send a pretty minimal request to the Parse API:
POST /1/some_url HTTP/1.1
X-Parse-Application-Id: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
X-Parse-REST-API-Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Content-Type: application/json
{"data":"value"}
I get the same empty response:
HTTP/1.1 400 BAD_REQUEST
Content-Length: 0
Connection: Close
Any ideas about possible errors on my part?
Answering my own question. In my case an additional header was needed:
Host: api.parse.com
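This fits the spec: HTTP/1.1 requires a Host header in every request (RFC 7230 §5.4), and a server must answer an HTTP/1.1 request without one with 400, which matches the empty 400 above. A small sketch building the raw request (the header values are placeholders):

```python
def build_request(method: str, path: str, host: str,
                  headers: dict, body: str) -> str:
    # Host must be present in every HTTP/1.1 request (RFC 7230 section 5.4);
    # without it a server is required to answer 400 Bad Request.
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    lines += [f"{k}: {v}" for k, v in headers.items()]
    lines += [f"Content-Length: {len(body.encode())}", "", body]
    return "\r\n".join(lines)

req = build_request(
    "POST", "/1/some_url", "api.parse.com",
    {"X-Parse-Application-Id": "xxx", "Content-Type": "application/json"},
    '{"data":"value"}',
)
print(req.splitlines()[1])  # Host: api.parse.com
```

Most HTTP clients add Host automatically from the URL; it only goes missing when the request line is assembled by hand, as in the question.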
The 206 status code (w3.org) indicates a partial result in response to a request with a Range header.
So "clearly" if the requested document is, e.g., 1024 bytes long and the Range header is bytes=0-512, then a status code of 206 Partial Content should be returned (assuming that the server is able to return the content).
BUT what if the Range is bytes=0-2000?
Should 200 OK or 206 Partial Content be returned?
It seems to me that this isn't clearly defined in the specification -- or maybe I'm not reading the right place?
Why do I care?
I ask because the Varnish Cache seems to always return 206 Partial Content, whereas the Facebook Open Graph debugger seems to expect 200 OK. [1] [2]
Example: GET request to Varnish
(I receive the full document, and yet 206 Partial Content is returned)
> curl --dump-header - -H 'Range: bytes=0-7000' https://www.varnish-cache.org/sites/all/themes/varnish_d7/logo.png
HTTP/1.1 206 Partial Content
Server: nginx/1.1.19
Date: Mon, 14 Apr 2014 22:43:31 GMT
Content-Type: image/png
Content-Length: 2884
Connection: keep-alive
Last-Modified: Thu, 15 Dec 2011 12:30:46 GMT
Accept-Ranges: bytes
X-Varnish: 1979866667
Age: 0
Via: 1.1 varnish
Content-Range: bytes 0-2883/2884
Further w3 reference: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35
It doesn't matter. Both replies are valid.
(also note that the current specification is now http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p5-range-26.html, to be published as RFC soon)
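Concretely, the range-requests spec says a range whose last-byte-pos runs past the end of the representation is still satisfiable: the server clamps it to the actual length and returns 206 with a Content-Range header reporting what it served, or it may ignore the Range header entirely and return 200. A sketch of the clamping logic (the function name is mine):

```python
def satisfy_range(first: int, last: int, size: int):
    """Clamp a 'bytes=first-last' request against a representation of
    `size` bytes. Returns (status, content_range); status 416 means the
    range is not satisfiable at all."""
    if first >= size:
        return 416, f"bytes */{size}"  # nothing to serve
    last = min(last, size - 1)         # clamp past-the-end ranges
    return 206, f"bytes {first}-{last}/{size}"

# The Varnish example: Range: bytes=0-7000 against a 2884-byte PNG.
print(satisfy_range(0, 7000, 2884))  # (206, 'bytes 0-2883/2884')
```

That is exactly what the curl transcript shows: the full 2884-byte body, status 206, and Content-Range: bytes 0-2883/2884.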