Increase the upload limit of HAProxy - haproxy

When using HAProxy, I've been getting the error 413: Request Entity Too Large.
This error occurs when I try to upload files that are too large; however, I cannot find any documentation on how to increase this limit.
How can you increase the maximum upload limit to a specified number of MB?

This is not an HAProxy error. As you can see here http://cbonte.github.io/haproxy-dconv/configuration-1.7#1.3.1, 413 is not in the list of status codes HAProxy generates itself.
So this is probably an error returned by the backend server, and HAProxy is just "forwarding" it to the client.
To be 100% sure, you can check the logs:
An error returned by HAProxy:
127.0.0.1:35494 fe_main be_app/websrv1 0/0/-1/-1/3002 503 212 - - SC-- 0/0/0/0/3 0/0 "GET /test HTTP/1.1"
An error returned by the backend server:
127.0.0.1:39055 fe_main be_app/websrv2 0/0/0/0/0 404 324 - - --NI 1/1/0/1/0 0/0 "GET /test HTTP/1.1"
Notice the "-1" in the timers.
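Since the 413 comes from the backend, the limit has to be raised there. If the backend happens to be nginx (an assumption; your server may differ), the relevant directive is client_max_body_size:

```nginx
# nginx.conf — raise the request body limit to 50 MB
# (the default is 1m, which commonly triggers 413 on uploads)
http {
    client_max_body_size 50m;
}
```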

Related

request_length is getting set to zero in nginx

Recently our servers were facing status_code: 400 errors. We added logs to check what is happening; these are our observations.
Log format added in the nginx.conf file:
log_format custom '$remote_addr STATUS_CODE:$status $request_length $bytes_sent REQUEST "$request" COOKIE "$http_cookie" Hello';
access_log /var/log/nginx/access.log custom;
We observe that for 400 errors the following log line is written:
xxx.xxx.xxx.xxx STATUS_CODE:400 0 326 REQUEST "GET www.xyz.abc" COOKIE "-"
Can you please help us find why $request_length is getting set to zero?

Getting 504 gateway timeout error when accessing node application through haproxy

I am facing the following situation when configuring HAProxy with a node/express application. I am trying to
achieve the following:
(https) (http)
browser ======> haproxy =====> node application
When loading the node application through the browser, I get an HTTP 504 Gateway Time-out error.
Below are my HAProxy configurations.
haproxy configurations
Following are the HAProxy logs:
vm-2 haproxy[21255]: 127.0.0.1:45948 [23/Dec/2019:10:57:51.411] https-in~ servers/server1 0/0/0/-1/100001 504 194 - - sH-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
vm-2 haproxy[21255]: 127.0.0.1:46122 [23/Dec/2019:10:59:31.435] https-in~ servers/server1 0/0/0/-1/100002 504 194 - - sH-- 1/1/0/0/0 0/0 "GET /favicon.ico HTTP/1.1"
Any help would be appreciated.
Your HAProxy logs indicate that it's taking over 100 seconds (i.e. the 100001/100002 total times, in milliseconds) for the request to complete, and that it's being aborted (the -1 timer) before your backend server can send the full response.
If you're looking for a strictly HAProxy-side solution (i.e. you can't or won't tune your application), then you need to adjust HAProxy's timeout settings.
We faced the same problem: the client requests were answered with 504s sent by HAProxy. We found that the defaults section of the haproxy.cfg file contained a timeout server setting that determined the 504 response (setting it to a lower value, 1s in our case, would reliably reproduce the 504). Increasing that value allows a longer connection between the proxy and the backend.
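A minimal sketch of the relevant defaults section (the values are illustrative; tune them to how long your backend legitimately needs):

```
defaults
    mode http
    timeout connect 5s
    timeout client  50s
    # the 'sH' termination flags in the logs point at this setting:
    # the server did not send response headers within 'timeout server'
    timeout server  300s
```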

haproxy - layer 7 health check failure

I am getting occasional layer 7 health check failures. This happens on the production machine seemingly at random, maybe once a minute or every few minutes on average. Here is the configuration:
backend api
    mode http
    option httpchk GET /api/v1/status HTTP/1.0
    http-check expect status 200
    balance roundrobin
    server api1 127.0.0.1:8001 check fall 3 rise 2
    server api2 127.0.0.1:8002 check fall 3 rise 2
The HAproxy log tells me the following:
Health check for server api/api2 failed, reason: Layer7 timeout, check duration: 10001ms, status: 2/3 UP.
The strange thing is, when I run a script to fetch the same URL at a much faster pace than HAProxy does, it never fails to return a 200 response. It never hangs as it seems to do for HAProxy.
In addition, I'm getting occasional HAProxy errors for various API calls, not just health checks, all looking quite similar:
https-in~ api/api1 45/0/0/-1/30045 504 194 - - sHVN 50/49/13/10/0 0/0 "POST /api/v1/accounts HTTP/1.1"
What could be the issue here? This one really got me stumped.
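One knob worth checking (an assumption, since the full config is not shown) is the check timeout itself: if it is not set explicitly, HAProxy derives it from the check interval, which can be shorter than a slow status endpoint needs. A sketch of the same backend with it set explicitly:

```
backend api
    mode http
    option httpchk GET /api/v1/status HTTP/1.0
    http-check expect status 200
    # give slow health-check responses more room before
    # declaring a Layer7 timeout
    timeout check 20s
    balance roundrobin
    server api1 127.0.0.1:8001 check inter 2s fall 3 rise 2
    server api2 127.0.0.1:8002 check inter 2s fall 3 rise 2
```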

prometheus is not able to talk to influxDB

I am running Prometheus as a Kubernetes pod and want Prometheus to write data to InfluxDB. I have added the entries below to prometheus.yml:
remote_read:
- url: "http://localhost:8086/api/v1/prom/read?u=xxxxxx&p=ids3pr0m&db=xxxxxx"
remote_write:
- url: "http://localhost:8086/api/v1/prom/write?u=xxxxxx&p=ids3pr0m&db=xxxxxx"
The pod is running fine and is able to read, but it keeps giving me the error below:
time="2018-05-03T17:38:31Z" level=warning msg="Error sending 100 samples to remote storage: server returned HTTP status 400 Bad Request: {"error":"proto: wrong wireType = 2 for field StartTimestampMs"}" source="queue_manager.go:500"
Can someone help me with this?
Ran into this as well, and for me it was caused by using Prometheus version 2.x.
It looks like InfluxDB only supports version 1.8.

already initialized constant, and required twice

Hi all,
I think this is a bug with constants defined in Sinatra; let's look at my code.
route.rb
require 'sinatra'
get '/' do
  C = "this is a test for constant"
  "Hello World!"
end
Gemfile
source 'http://rubygems.org'
gem 'rack'
gem 'sinatra'
config.ru
require './route'
run Sinatra::Application
Starting the web server, we see the following:
$ rackup
[2011-10-08 19:54:36] INFO WEBrick 1.3.1
[2011-10-08 19:54:36] INFO ruby 1.9.2 (2011-07-09) [i686-linux]
[2011-10-08 19:54:36] INFO WEBrick::HTTPServer#start: pid=3268 port=9292
127.0.0.1 - - [08/Oct/2011 19:54:42] "GET / HTTP/1.1" 200 25 0.0059
127.0.0.1 - - [08/Oct/2011 19:54:42] "GET / HTTP/1.1" 200 25 0.0142
/home/zcdny/repo/test/route.rb:4: warning: already initialized constant C
127.0.0.1 - - [08/Oct/2011 19:54:43] "GET / HTTP/1.1" 200 25 0.0094
127.0.0.1 - - [08/Oct/2011 19:54:43] "GET / HTTP/1.1" 200 25 0.0098
/home/zcdny/repo/test/route.rb:4: warning: already initialized constant C
127.0.0.1 - - [08/Oct/2011 19:54:55] "GET / HTTP/1.1" 200 25 0.0003
127.0.0.1 - - [08/Oct/2011 19:54:55] "GET / HTTP/1.1" 200 25 0.0006
/home/zcdny/repo/test/route.rb:4: warning: already initialized constant C
127.0.0.1 - - [08/Oct/2011 19:54:56] "GET / HTTP/1.1" 200 25 0.0003
127.0.0.1 - - [08/Oct/2011 19:54:56] "GET / HTTP/1.1" 200 25 0.0005
Edit
Fixed route.rb:
require 'sinatra'
configure do
  C = "this is a test for constant"
end

get '/' do
  "Hello World!"
end
Now the server no longer warns about the constant being reinitialized.
But the server log still shows a double 'GET' request per page load; I want just one request per client visit. That is my question: how do I solve it?
Thanks in advance.
What's wrong with that? If you define the constant twice (which happens if you have two GET requests, or a GET and a HEAD request), then that warning will be displayed. Use a global variable instead. But if you don't have to, try to avoid global state at all costs, otherwise you might run into architectural issues (what if you want to serve more endpoints and globals clash?) and make it hard to scale: if you rely on the internal state of a process, will you be able to serve the website from two processes? What about two machines?
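If the value really is a constant, assigning it once at load time (outside any route block), as the configure-block fix above does, avoids the warning entirely. A minimal plain-Ruby sketch of the same pattern (GREETING and handler are hypothetical names used for illustration):

```ruby
# Assign the constant once when the file is loaded and freeze it;
# request handlers only read it, so the "already initialized
# constant" warning can never fire.
GREETING = "this is a test for constant".freeze

def handler
  # simulates a route body: it reads the constant, never re-assigns it
  "Hello World! #{GREETING}"
end

puts handler  # first "request"
puts handler  # second "request": no warning either time
```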