I have a strange status code in my HAProxy log file (note that it's not a customized log-format, it's the default HTTP log).
43.56.77.23:55309 [27/Oct/2015:20:14:34.749] front-http mybackend/app 349/0/-1/-1/359 **-1** 0 - - CC-- 1658/1658/21/21/0 0/0 "GET /img/button_bkg.png HTTP/1.1"
What does the -1 status code mean? I tried to find an answer online, but unfortunately I could not find anything that resembles my problem.
Does anyone know what this status code means?
-1 indicates that the status code is not available. The reason is in the termination flags field.
See section 8.5 in the docs.
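Reading your example line against that section, it breaks down roughly like this (field names per the default HTTP log format):
349/0/-1/-1/359   Tq/Tw/Tc/Tr/Tt timers: the -1 in Tc and Tr means no connection to the server was ever established and no response was received
-1                the status field: since nothing came back from the server, there is no real code to log
CC--              termination state: the first C means the client aborted, the second C means HAProxy was still waiting for the connection to the server to establish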
Good day everyone!
I migrated from HAProxy 1.5 to 1.7.11 and I'm having some trouble with logging.
I have the following in the config file for logging:
capture request header Host len 200
capture request header Referer len 200
capture request header User-Agent len 200
capture request header Content-Type len 200
capture request header Cookie len 300
log-format %[capture.req.hdr(0),lower]\ %ci\ -\ [%t]\ \"%HM\ %HP\ %HV\"\ %ST\ \"%[capture.req.hdr(3)]\"\ %U\ \"%[capture.req.hdr(1)]\"\ \"%[capture.req.hdr(2)]\"\ \"%[capture.req.hdr(4)]\"\ %Tq\ \"%s\"\ 'NGINX-CACHE-- "-"'\ \"%ts\"
The log format is almost the same as Nginx's.
But in some cases it works incorrectly.
For example, this log output:
Nov 20 10:41:56 lb.loc haproxy[12633]: example.com 81.4.227.173 - [20/Nov/2019:10:41:56.095] "GET /piwik.php H" 200 "-" 2396 "https://example.com/" "Mozilla/5.0" "some.cookie data" 19 "vm06.lb.loc" NGINX-CACHE-- "-" "--"
The problem is that "GET /piwik.php H" should be "GET /piwik.php HTTP/1.1";
it's the %HV parameter in the log-format.
Part of "HTTP/1.1" is randomly cut off. It may be "HT" or "HTT" or "HTTP/1.".
I think we have discussed this on the HAProxy mailing list.
https://www.mail-archive.com/haproxy@formilux.org/msg35426.html
There are some bug fixes in the buffer handling, so please try to update to the latest 1.7.
Since you mentioned on the HAProxy list that you use CentOS 6 with the packages from the IUS repo, please install 1.7.12, which is listed on the page below.
https://repo.ius.io/6/x86_64/packages/h/
As described in the documentation:
req.hdr(): [...] The function considers any comma as a delimiter for distinct values. If full-line headers are desired instead, use req.fhdr(). [...]
So, you should use req.fhdr() to have the full header value.
For example, like this:
http-request capture req.fhdr(User-Agent) len 256
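To illustrate the difference, suppose a request carries a made-up User-Agent value that contains a comma:
# hypothetical header:  User-Agent: ExampleBrowser/1.0 (X11, Linux x86_64)
# req.hdr(User-Agent)   -> captures only one comma-delimited piece, e.g. "ExampleBrowser/1.0 (X11"
# req.fhdr(User-Agent)  -> captures the full line: "ExampleBrowser/1.0 (X11, Linux x86_64)"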
Information from an issue thread in the official repository.
I am getting occasional layer 7 health check failures. This happens on a production machine, seemingly at random, maybe once a minute or every few minutes on average. Here is the configuration:
backend api
    mode http
    option httpchk GET /api/v1/status HTTP/1.0
    http-check expect status 200
    balance roundrobin
    server api1 127.0.0.1:8001 check fall 3 rise 2
    server api2 127.0.0.1:8002 check fall 3 rise 2
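As I understand the directives above (annotating them here for reference):
# option httpchk ...     -> each health check sends "GET /api/v1/status HTTP/1.0" to the server
# http-check expect ...  -> anything other than a 200 response counts as a failed check
# fall 3 / rise 2        -> three consecutive failures mark a server DOWN, two consecutive successes bring it back UP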
The HAProxy log tells me the following:
Health check for server api/api2 failed, reason: Layer7 timeout, check duration: 10001ms, status: 2/3 UP.
The strange thing is that when I run a script fetching the same URL at a much faster pace than HAProxy does, it never fails to return a 200 response. It never hangs the way it seems to for HAProxy.
In addition, I'm getting occasional HAProxy errors for various API calls, not just health checks, all looking quite similar:
https-in~ api/api1 45/0/0/-1/30045 504 194 - - sHVN 50/49/13/10/0 0/0 "POST /api/v1/accounts HTTP/1.1"
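For what it's worth, reading that line against section 8.5 of the docs, it seems to break down as:
45/0/0/-1/30045   Tq/Tw/Tc/Tr/Tt timers: the connection to api1 was established instantly (Tc=0), but Tr=-1 means no response headers ever arrived before the ~30 s total
504               a gateway-timeout response generated by HAProxy itself, not by the backend
sHVN              s: a server-side timeout expired, H: while HAProxy was waiting for response headers from the server (the last two flags describe cookie handling)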
What could be the issue here? This one has really got me stumped.
When using HAProxy, I've been getting the error 413: Request Entity Too Large
This error occurs when I'm trying to upload files which are too large; however, I cannot find any documentation on how to increase this limit.
How can you increase the maximum upload limit to a specified number of MB?
This is not an HAProxy error: as you can see here http://cbonte.github.io/haproxy-dconv/configuration-1.7#1.3.1, the 413 error is not in the list.
So this is probably an error returned by the backend server, and HAProxy is just "forwarding" it to the client.
To be 100% sure, you can check the logs:
An error returned by HAProxy:
127.0.0.1:35494 fe_main be_app/websrv1 0/0/-1/-1/3002 503 212 - - SC-- 0/0/0/0/3 0/0 "GET /test HTTP/1.1"
An error returned by the backend server:
127.0.0.1:39055 fe_main be_app/websrv2 0/0/0/0/0 404 324 - - --NI 1/1/0/1/0 0/0 "GET /test HTTP/1.1"
Notice the "-1" in the timers.
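So the limit has to be raised on the backend itself. Purely as an example (it is only an assumption that this is what you run), if the backend is nginx, the 413 comes from its client_max_body_size directive, which could be raised along these lines:
# nginx, in the http, server or location block: allow request bodies up to 50 MB
client_max_body_size 50m;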
I'd like to be able to have two sets of slides, produced from two different notebooks, open at once in my browser. This use case is not supported, as far as I can tell, by the option --post serve of ipython nbconvert --to slides (of course, I'd be happy to be disproved).
My tactic has been to start a local server, as in
python -m SimpleHTTPServer 8001
and open the slide shows like this
google-chrome http://127.0.0.1:8001/my.slides.html
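Side note, in case the Python version matters: on Python 3 the SimpleHTTPServer module was renamed, so the equivalent server command would be
python3 -m http.server 8001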
but now I get a bunch of messages like these:
127.0.0.1 - - [31/Mar/2015 12:03:49] code 404, message File not found
127.0.0.1 - - [31/Mar/2015 12:03:49] "GET /reveal.js/css/reveal.css HTTP/1.1" 404 -
whose meaning is quite clear to me... so I did
ln -s /path/to/local/copy/of/reveal.js/ .
google-chrome http://127.0.0.1:8001/
but now I have
127.0.0.1 - - [31/Mar/2015 12:07:29] code 404, message File not found
127.0.0.1 - - [31/Mar/2015 12:07:29] "GET /custom.css HTTP/1.1" 404 -
Examining the source of my.slides.html, I see the lines
<!-- Custom stylesheet, it must be in the same directory as the html file -->
<link rel="stylesheet" href="custom.css">
so I'm bound to the conclusion that --post serve does an awful lot of things behind my back and that I'm out of luck in my attempt to save a standalone slide show and have it served by a local HTTP server.
How can I have a properly served slide show without resorting to --post serve?
I am trying to run this example from the book Sinatra: Up and Running p. 46, but can't get it to work. Here's the program code:
require 'sinatra'

before do
  content_type :txt
end

connections = []

get '/consume' do
  stream(:keep_open) do |out|
    # store connection for later on
    connections << out
    # remove connection when closed properly
    out.callback { connections.delete(out) }
    # remove connection when closed due to an error
    out.errback do
      logger.warn 'we just lost a connection!'
      connections.delete(out)
    end
  end
end

get '/broadcast/:message' do
  connections.each do |out|
    out << "#{Time.now} -> #{params[:message]}" << "\n"
  end
  "Sent #{params[:message]} to all clients."
end
The instructions for testing the code are as follows:
It's a little tricky to demonstrate the behavior in text, but a good demonstration would be to start the application, then open a web browser and navigate to http://localhost:4567/consume. Next, open a terminal and use cURL to send messages to the server.
$ curl http://localhost:4567/broadcast/hello
Sent hello to all clients.
If you look back at the web browser, you should see that the content of the page has been updated with a time stamp and the message that you sent via the terminal. The connection remains open, and the client continues to wait for further information from the server.
When I follow these instructions, I get no errors, but the message "hello" does not appear in the browser. I am running Sinatra with WEBrick. Why is it not working?
Thanks!
UPDATE (Konstantin's Thin Suggestion)
I now start Thin and perform the two steps described in the book and in the OP. You can see that Thin does indeed receive both requests. However, I am still not seeing the output "hello" in the browser.
>rackup
>> Thin web server (v1.4.1 codename Chromeo)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:9292, CTRL+C to stop
127.0.0.1 - - [13/Aug/2012 12:48:03] "GET /consume HTTP/1.1" 200 - 0.0900
127.0.0.1 - - [13/Aug/2012 12:48:03] "GET /favicon.ico HTTP/1.1" 404 447 0.0000
127.0.0.1 - - [13/Aug/2012 12:49:02] "GET /broadcast/hello HTTP/1.1" 200 26 0.0000
127.0.0.1 - - [13/Aug/2012 12:57:00] "GET /consume HTTP/1.1" 200 - 0.0000
Perhaps the mistake is in my config.ru file:
require './streaming.rb'
run Sinatra::Application
Run Sinatra on Thin; :keep_open is not supported on WEBrick. Make sure you're running Sinatra 1.3.3 or later.
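One way to do that, assuming the thin gem is installed, is to pin the server in the app itself or pass it on the command line, for example:
# in streaming.rb, before the routes
set :server, :thin
# or when starting the classic-style app directly
ruby streaming.rb -s thin
# or, since you are using a config.ru
rackup -s thin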
I had the same problem. To speed up the response, I used
before do
  content_type 'text/event-stream'
end
The second route has to be a POST:
post '/broadcast/:message' do
  connections.each do |out|
    out << "#{Time.now} -> #{params[:message]}" << "\n"
  end
  "Sent #{params[:message]} to all clients."
end
After that, you will also have to POST your message to the server:
curl -vX POST 127.0.0.1:4567/broadcast/hello
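If you want to watch the stream without a browser, it can also be consumed from a second terminal; -N tells curl not to buffer the output:
curl -N http://127.0.0.1:4567/consume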