How to increase the request header size limit in HAProxy? - haproxy

How do I increase the header size limit in HAProxy? Currently the default setting allows only 8 KB.
Because of this, clients are getting 400 errors.
localhost haproxy[21502]: xx.xx.xx.xx:xxxx [xx/xxx/xxxx:xx:xx:xx.xx] www-http www-http/ -1/-1/-1/-1/0 400 187 - - PR-- 411/23/0/0/0 0/0 ""

You can set tune.bufsize to a higher value in the global section; note that the space reserved by tune.maxrewrite is subtracted from it, so the request headers have to fit within tune.bufsize minus tune.maxrewrite.
That said, it is strongly recommended to keep headers below 8 KB.
Here is a link comparing the default header size limits across web servers: Maximum HTTP request header size defaults compared across web servers
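A minimal sketch of the global section, with an illustrative value (not a recommendation):

global
    # headers must fit within tune.bufsize minus tune.maxrewrite
    tune.bufsize 65536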

Related

HAProxy returns 414 Error Code with very long URL

I made a GET request with a very long URL (about 12 KB) to an HAProxy server and got 400 Bad Request. Then I set
tune.bufsize 65536
as suggested here: haproxy and large GET requests.
Now the server returns 414 Request-URI Too Long.
It looks like you also have to increase the request URI length limit on the backend servers.
For example, for nginx: https://stackoverflow.com/a/69430750/6778826
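As a hedged sketch of the nginx side (the directive is real, the values are illustrative), the setting that bounds request-line and header length is large_client_header_buffers:

http {
    # allow up to 4 buffers of 16k each for long request lines / headers
    large_client_header_buffers 4 16k;
}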

AWS API Gateway 10MB payload compression

I have a GET endpoint that sometimes returns a JSON payload larger than 10MB, thus giving me error code 500 as follows:
Execution failed due to configuration error:
Integration response of reported length 10507522 is larger than allowed maximum of 10485760 bytes.
I was aware of the 10MB payload quota, so I enabled Content Encoding in the Settings and added the HTTP header Accept-Encoding:gzip to the request.
Now, payloads that were 7-8MB uncompressed are being sent with a size of about 350KB, so the compression works. Still, payloads over 10MB continue to give that same error.
Why does it keep giving me this error? Is it checking payload size before compression? How can I fix this?
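For reference, a sketch of the request described above (the URL is a placeholder; --compressed simply tells curl to decode the gzip response):

curl --compressed -H "Accept-Encoding: gzip" \
  "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>"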

Reading only one message from the topic using REST Proxy

I use Kafka version 2.2.0cp2 through the REST Proxy (in a Docker container). I need the consumer to always read only one message.
I set the value max.poll.records=1 in the file /etc/kafka/consumer.properties as follows:
consumer.max.poll.records=1
OR:
max.poll.records=1
It had no effect.
Setting this value in other configs also did not give any result.
So it appears consumer.properties is not read by the REST Proxy.
Assuming consumer properties can be changed, the kafka-rest container env-var would be KAFKA_REST_CONSUMER_MAX_POLL_RECORDS, but that setting only controls the inner poll loop of the Proxy server, not the amount of data returned to the HTTP client...
There would have to be a limit parameter on the API, which does not exist - https://docs.confluent.io/current/kafka-rest/api.html#get--consumers-(string-group_name)-instances-(string-instance)-records
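For completeness, a sketch of passing that env-var to the kafka-rest container (the image tag and the other settings are assumptions, included only to make the example self-contained):

docker run -d \
  -e KAFKA_REST_CONSUMER_MAX_POLL_RECORDS=1 \
  -e KAFKA_REST_BOOTSTRAP_SERVERS=kafka:9092 \
  -e KAFKA_REST_LISTENERS=http://0.0.0.0:8082 \
  confluentinc/cp-kafka-rest:5.2.1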
I don't see any consumer poll setting mentioned in the below link
https://docs.confluent.io/current/kafka-rest/config.html
But if you know the average message size, you can pass max_bytes as shown below to control the size of the returned records:
GET /consumers/testgroup/instances/my_consumer/records?timeout=3000&max_bytes=300000 HTTP/1.1
max_bytes: The maximum number of bytes of unencoded keys and values that should be included in the response. This provides approximate control over the size of responses and the amount of memory required to store the decoded response. The actual limit will be the minimum of this setting and the server-side configuration consumer.request.max.bytes. Default is unlimited.
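As a sketch, the same call with curl (the host and port are placeholders, and the Accept header assumes the consumer was created with the JSON embedded format; group and instance names are taken from the request above):

curl -X GET \
  -H "Accept: application/vnd.kafka.json.v2+json" \
  "http://localhost:8082/consumers/testgroup/instances/my_consumer/records?timeout=3000&max_bytes=300000"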

jute.maxbuffer affects only incoming traffic

Does this value only affect incoming traffic? If I set this value to, say, 4MB on both the ZooKeeper server and the ZooKeeper client and I start my client, will I still get data > 4MB when I request a path like /abc/asyncMultiMap/subs? If /subs has data greater than 4MB, is the server going to break it up into chunks <= 4MB and send it to the client in pieces?
I am using zookeeper 3.4.6 on both client (via vertx-zookeeper) and server. I see errors on clients where it complains that packet length is greater than 4MB.
java.io.IOException: Packet len4194374 is out of range!
at org.apache.zookeeper.ClientCnxnSocket.readLength(ClientCnxnSocket.java:112) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
"This is a server-side setting"
This statement is incorrect, jute.maxbuffer is evaluated on client as well by Record implementing classes that receive InputArchive. Each time a field is read and stored into an InputArchive the value is checked against jute.maxbuffer. Eg ClientCnxnSocket.readConnectResult
I investigated it in ZK 3.4
There is no chunking in the response.
This is a server-side setting. You will get this error if the entirety of the response is greater than the jute.maxbuffer setting. This response limit includes the list of children of a znode as well, so even if /subs does not have a lot of data but has enough children that the total length of their paths exceeds the max buffer size, you will get the error.
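Since jute.maxbuffer is a plain JVM system property, a hedged sketch of raising it on both sides looks like this (values and file names are illustrative; where the flag goes depends on how you start each JVM):

# ZooKeeper server, e.g. in conf/java.env or zookeeper-env.sh
SERVER_JVMFLAGS="-Djute.maxbuffer=8388608"

# client JVM (e.g. the process embedding vertx-zookeeper)
java -Djute.maxbuffer=8388608 -jar my-app.jar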

Does HAProxy honor a 503?

I have an HAProxy acting as a load balancer for other boxes.
I know that when a box returns a response in the 500 range (on a health check), haproxy takes the box out of rotation.
What does it do if the proxy gets a 503 from a health check? A 503 normally mandates a retry. Does it retry according to the Retry-After header, or does it take the box out of rotation?
If it retries, does the header matter? In other words, if there is no Retry-After header, does it still honor the 503 and retry, or does it count that as a box error and remove the box from rotation?
HAProxy treats any response in the 500 range to a health check as an error. https://code.google.com/p/haproxy-docs/wiki/httpchk
Only 2xx and 3xx responses are considered successes; all others are considered failures.
The answer to the second part of your question depends on how your health check thresholds are set. If they are set to take the host out of rotation after 1 failure and the host returns a 503, then yes, it will be removed from rotation. If you have it configured to require 2 failures and the host returns only a single 503 and then starts returning 200s, the host will stay in rotation.
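A minimal sketch of how those thresholds are expressed in an HAProxy config (server names, addresses and intervals are illustrative):

backend www-backend
    option httpchk GET /health
    # fall: consecutive failed checks before a server is marked down
    # rise: consecutive successful checks before it is brought back up
    server web1 10.0.0.1:80 check inter 2s fall 2 rise 3
    server web2 10.0.0.2:80 check inter 2s fall 2 rise 3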