Good day everyone!
I migrated from HAProxy 1.5 to 1.7.11 and I'm having some trouble with logging.
I have the following in my config file for logging:
capture request header Host len 200
capture request header Referer len 200
capture request header User-Agent len 200
capture request header Content-Type len 200
capture request header Cookie len 300
log-format %[capture.req.hdr(0),lower]\ %ci\ -\ [%t]\ \"%HM\ %HP\ %HV\"\ %ST\ \"%[capture.req.hdr(3)]\"\ %U\ \"%[capture.req.hdr(1)]\"\ \"%[capture.req.hdr(2)]\"\ \"%[capture.req.hdr(4)]\"\ %Tq\ \"%s\"\ 'NGINX-CACHE-- "-"'\ \"%ts\"
The log format is almost the same as Nginx's,
but in some cases it works incorrectly.
For example, this log output:
Nov 20 10:41:56 lb.loc haproxy[12633]: example.com 81.4.227.173 - [20/Nov/2019:10:41:56.095] "GET /piwik.php H" 200 "-" 2396 "https://example.com/" "Mozilla/5.0" "some.cookie data" 19 "vm06.lb.loc" NGINX-CACHE-- "-" "—"
The problem is that "GET /piwik.php H" should be "GET /piwik.php HTTP/1.1";
that is the %HV parameter in the log-format.
Part of "HTTP/1.1" is randomly cut off; it may come out as "HT", "HTT" or "HTTP/1.".
I think we have discussed this on the HAProxy mailing list.
https://www.mail-archive.com/haproxy@formilux.org/msg35426.html
There are some bug fixes in the buffer handling, so please try updating to the latest 1.7.
Since you mentioned on the HAProxy list that you use CentOS 6 with packages from the IUS repo, please install 1.7.12, which is listed on the page below.
https://repo.ius.io/6/x86_64/packages/h/
As described in the documentation:
req.hdr(): [...] The function considers any comma as a delimiter for distinct values. If full-line headers are desired instead, use req.fhdr(). [...]
So you should use req.fhdr() to get the full header value.
For example, like this:
http-request capture req.fhdr(User-Agent) len 256
This information comes from an issue thread in the official repository.
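For illustration, here is a minimal, untested sketch of how the captures quoted above could be switched to full-line captures while keeping the same log-format fetches; the lengths and slot numbering are assumptions based on the original config:

# http-request capture declares capture slots in declaration order, so
# capture.req.hdr(0) is still Host, capture.req.hdr(2) is still User-Agent, etc.
http-request capture req.fhdr(Host) len 200
http-request capture req.fhdr(Referer) len 200
http-request capture req.fhdr(User-Agent) len 200
http-request capture req.fhdr(Content-Type) len 200
http-request capture req.fhdr(Cookie) len 300
log-format %[capture.req.hdr(0),lower]\ %ci\ -\ [%t]\ \"%HM\ %HP\ %HV\"\ %ST\ \"%[capture.req.hdr(2)]\"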
Related
I have a website served by IIS 10 on a Windows Server 2019 machine running Plesk. The site is mainly Classic ASP. I have a staging subdomain at staging.example.com, with the production site at www.example.com.
The two are fairly strictly separated, except that I don't store image files, PDFs and the like on the staging server; I have a URL rewrite directive that redirects to the production site with a 302 status whenever the URL does not match the following regex:
\.(php|asp|js|css|csv|json|htm|html|svg|svgz)(\?.+)?$
This generally works well: ASP pages are served from the staging site when the staging URL is called, but images on the page are pulled from the production site.
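For context, a rule of the kind described might look roughly like this in web.config; the rule name and the use of {REQUEST_URI} are illustrative assumptions, not the actual configuration:

<rewrite>
  <rules>
    <!-- Sketch: redirect anything that does NOT match the extension
         whitelist to the production host with a 302 (Found). -->
    <rule name="StaticToProduction" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <add input="{REQUEST_URI}"
             pattern="\.(php|asp|js|css|csv|json|htm|html|svg|svgz)(\?.+)?$"
             negate="true" />
      </conditions>
      <action type="Redirect" url="https://www.example.com{REQUEST_URI}"
              redirectType="Found" />
    </rule>
  </rules>
</rewrite>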
Except that there’s one ASP file which – for some reason – gives a 302 and redirects to the production site no matter what I do. The file exists in both locations. I’ve tested the URL in the pattern tester provided in the IIS URL-rewrite section, and it matches the pattern (meaning it shouldn’t redirect).
When I trace the request (that is, the initial request to the staging URL) in Firefox’s browser console, I get the following response headers (redacted):
HTTP/2 302 Found
cache-control: no-cache
content-type: text/html
location: https://www.example.com/path/to/file.asp
server: Microsoft-IIS/10.0
set-cookie: ASPSESSION****=********; secure; path=/
x-powered-by: ASP.NET
x-powered-by-plesk: PleskWin
date: Sun, 19 Dec 2021 18:52:05 GMT
content-length: 201
And the request headers:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.5
Authorization: Basic *************
Connection: keep-alive
Cookie: [cookies]
Host: staging.example.com
Referer: https://staging.example.com/path/to/file.asp
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: same-origin
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:96.0) Gecko/20100101 Firefox/96.0
I’ve painstakingly gone through the entire file and all the file includes within it, and I can’t find any kind of Response.Redirect in any of them that might be responsible.
So it seems it’s IIS that’s redirecting with a 302… despite the fact that there doesn’t seem to be a directive that tells it to do this.
Is there a way to trace exactly what on the server is causing this 302 for one specific file? Some sort of tracing mechanism that tells me where the request gets passed on to before the 302 response is returned?
Update 26 Dec
Based on samwu's comment, I've enabled Failed Request Tracing for the page. Looking through the resulting .frb file, it's clear that none of the rewrite conditions are met – they all have succeed: false. In fact, the redirect doesn't seem to be happening in the WWW Server at all, but in the ISAPI extension. This is the only place the production site URL is mentioned anywhere in the request trace (except, of course, in the GENERAL_RESPONSE_HEADER section at the very end):
ISAPI_START
MODULE_SET_RESPONSE_SUCCESS_STATUS ModuleName="IsapiModule", Notification="EXECUTE_REQUEST_HANDLER", HttpStatus="302", HttpReason="Object moved"
GENERAL_SET_RESPONSE_HEADER HeaderName="Location", HeaderValue="https://www.example.com/path/to/file.asp", Replace="false"
GENERAL_SET_RESPONSE_HEADER HeaderName="Content-Length", HeaderValue="201", Replace="false"
GENERAL_SET_RESPONSE_HEADER HeaderName="Content-Type", HeaderValue="text/html", Replace="false"
GENERAL_SET_RESPONSE_HEADER HeaderName="Cache-control", HeaderValue="no-cache", Replace="false"
NOTIFY_MODULE_COMPLETION ModuleName="IsapiModule", Notification="EXECUTE_REQUEST_HANDLER", fIsPostNotificationEvent="false", CompletionBytes="0", ErrorCode="The operation completed successfully. (0x0)"
ISAPI_END
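For reference, a failed-request-tracing rule like the one enabled here can also be declared in web.config; the path, providers, areas and status code below are assumptions to adjust (the Rewrite area, for instance, only exists when the URL Rewrite module is installed):

<system.webServer>
  <tracing>
    <traceFailedRequests>
      <add path="file.asp">
        <traceAreas>
          <!-- Trace the rewrite decisions and the classic-ASP/ISAPI handler. -->
          <add provider="WWW Server" areas="Rewrite,Module" verbosity="Verbose" />
          <add provider="ISAPI Extension" verbosity="Verbose" />
          <add provider="ASP" verbosity="Verbose" />
        </traceAreas>
        <failureDefinitions statusCodes="302" />
      </add>
    </traceFailedRequests>
  </tracing>
</system.webServer>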
In the ISAPI Filters section in IIS Manager, there are four filters: a 32-bit and a 64-bit version for ASP.Net 2.0 and the same for ASP.Net 4.0, all called aspnet_filter.dll. I’m guessing these are standard filters – I know for certain, at least, that we haven’t mucked about with any ISAPI filters at all.
As should be obvious by now, I’m not really a server admin, and ISAPI filters are definitely above my level of knowledge.
So how do I proceed from here? How do I figure out why ISAPI is redirecting?
I'm trying to get an AWS API Gateway implementation going, and am trying to send a request from the HTTPie CLI rather than from Postman. It works perfectly from Postman, but HTTPie doesn't seem to work for me; it only throws a 307 Temporary Redirect.
Using the following command:
http POST {userid}.execute-api.ap-southeast-2.amazonaws.com/sqstest/message name=john
Outputs:
HTTP/1.1 307 Temporary Redirect
Connection: keep-alive
Content-Length: 185
Content-Type: text/html
Date: Mon, 16 Apr 2018 06:28:24 GMT
Location: https://{userid}.execute-api.ap-southeast-2.amazonaws.com/sqstest/message
Server: CloudFront
Via: ################(CloudFront)
X-Amz-Cf-Id: ######################
X-Cache: Redirect from cloudfront
I did notice that the Content-Type was text/html, which was odd considering I needed to send JSON, but no matter what variant of the command I tried, it returned the same result.
From my understanding it should work the same as Postman as long as the headers are the same (they are, except for the Content-Type, which doesn't change even if I define it using -j/--json).
Any help?
Cheers.
After a few hours of trial and error, I determined that the error was in the syntax.
The command needed https:// on the URL (without the scheme, the request goes over plain HTTP and CloudFront answers with the 307 redirect to the HTTPS URL shown above), and to send the value as raw JSON it needed a colon-equals (:=).
For example:
http POST https://{userid}.execute-api.ap-southeast-2.amazonaws.com/sqstest/message name:=john
As opposed to the statement in the question.
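For completeness, a hedged example of the same call with the JSON flag spelled out (the endpoint is still the placeholder from the question). Note that in HTTPie's default JSON mode, name=john already sends the string field {"name": "john"}, while := expects valid JSON on the right-hand side and is meant for non-string values:

# --json (-j) sets the Content-Type and Accept headers to application/json.
http --json POST https://{userid}.execute-api.ap-southeast-2.amazonaws.com/sqstest/message \
    name=john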
In Python (and my browser), I am able to send a request to https://www.devrant.com/api/devrant/rants?app=3&sort=algo&limit=10&skip=0 and get a response, as expected, but with Lua I get HTTP/1.1 301 Moved Permanently. Here is what I have tried so far:
http = require("socket.http");
print(http.request("https://www.devrant.com/api/devrant/rants?app=3&sort=algo&limit=10&skip=0"))
which outputs an HTTP error page (moved permanently) and
301 table: 0x8f32470 http/1.1 301 Moved Permanently
the table's contents are:
location https://www.devrant.com/api/devrant/rants?app=3&sort=algo&limit=10&skip=0
content-type text/html
server nginx/1.10.0 (Ubuntu)
content-length 194
connection close
date Mon, 11 Dec 2017 01:41:35
Why does only Lua get this error? If I request Google, I get the Google home page HTML, and if I request status.mojang.com, I get the Mojang server statuses as a JSON string, so the socket library itself certainly works.
It's because you are using socket.http to request a page from an https URL. Since socket.http doesn't handle HTTPS, it sends the request to port 80, which redirects to the https URL, but the socket library doesn't follow that redirect, as it doesn't "know" what to do with https, so it simply reports the 301.
You need to install luasec and use ssl.https instead of socket.http, which will make it work.
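A minimal sketch, assuming the luasec rock is installed (its ssl.https module exposes the same simple request interface as socket.http):

local https = require("ssl.https")

-- Simple form: returns the body plus the status code, headers and status line.
local body, code, headers, status = https.request(
  "https://www.devrant.com/api/devrant/rants?app=3&sort=algo&limit=10&skip=0")

print(code, status)  -- expect 200 instead of the 301
print(body)          -- the JSON payload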
I'm looking for a solution that redirects to another domain if the response from the HTTP server was 404.
acl not_found status 404
acl found_ceph status 200
use_backend minio_s3 rsprep ^HTTP/1.1\ 404\ (.*)$ HTTP/1.1\ 302\ Found\nLocation:\ / if not_found
use_backend ceph if found_ceph
But it's still not working; requests still go to the minio_s3 backend.
Thank you for your advice.
When the response from this backend has status 404, first add a Location header that will send the browser to example.com with the original URI intact, then set the status code to 302 so the browser executes a redirect.
backend my-backend
mode http
server my-server 203.0.113.113:80 check inter 60000 rise 1 fall 2
http-response set-header Location http://example.com%[capture.req.uri] if { status eq 404 }
http-response set-status 302 if { status eq 404 }
Test:
$ curl -v http://example.org/pics/funny/cat.jpg
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to example.org (127.0.0.1) port 80 (#0)
> GET /pics/funny/cat.jpg HTTP/1.1
> User-Agent: curl/7.35.0
> Host: example.org
> Accept: */*
The actual back-end returns 404, but we don't see it. Instead...
< HTTP/1.1 302 Moved Temporarily
< Last-Modified: Thu, 04 Aug 2016 16:59:51 GMT
< Content-Type: text/html
< Content-Length: 332
< Date: Sat, 07 Oct 2017 00:03:22 GMT
< Location: http://example.com/pics/funny/cat.jpg
The response body from the back-end's 404 error page will still be sent to the browser, but -- as it turns out -- the browser will not display it, so no harm done. This requires HAProxy 1.6 or later.
@Michael's answer is rather good, but it isn't working for me, for two reasons:
Mainly because the %[capture.req.uri] tag resolves to empty (HAProxy 1.7.9 Docker image).
Also because the original example is incomplete: the frontend section is missing.
So I struggled for a while, since the Internet offers all kinds of answers, split between those who swear the 404 logic should go in the frontend, those who put it in the backend, and every possible combination of tags...
This is my answer, which works for me.
My use case is that if an image is not found on the backend behind HA Proxy, then an S3 bucket is checked.
The entry point is: https://myhostname:8080/path/to/image.jpeg
defaults
mode http
global
log 127.0.0.1:514 local0 debug
frontend come_on_over_here
bind :8080
# The following two lines are here to save values while we have access to them. They won't be available in the backend section.
http-request set-var(txn.path) path
http-request set-var(txn.query) query
http-request replace-value Host localhost:8080 dev.local:80
default_backend onprems_or_s3_be
backend onprems_or_s3_be
log global
acl path_photos var(txn.path) -m beg /path/prefix/i/want/to/strip/off
acl p_ext_jpeg var(txn.path) -m end .jpeg
acl is404 status eq 404
http-response set-header Location https://mybucket.s3.eu-west-3.amazonaws.com"%[var(txn.path),regsub(^/path_prefix_i_want_to_strip_off/,/)]?%[var(txn.query)]" if path_photos p_ext_jpeg is404
http-response set-status 301 if is404
server onprems_server dev.local:80 check
I have HAProxy behind an AWS ELB. As soon as I remove the ELB, I get the custom error page, but with the ELB in front of HAProxy I get HTTP/1.1 504 GATEWAY_TIMEOUT Content-Length: 0 Connection: keep-alive.
Can anyone tell me what is going on, please? Thanks
errorfile:
HTTP/1.0 403 Forbidden
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>403 Forbidden</h1>
Request forbidden by administrative rules.
</body></html>
A coworker and I just had the same problem. After receiving the timeouts and reading Amazon's definition of this type of HTTP code, I got it into my head that my error file was "malformed". After a lot of tries, we discovered that there is something funny with the CR-LF line endings in the error file "header".
I downloaded HAProxy's default file from their git (https://raw.githubusercontent.com/haproxy/haproxy/60220bbc4b6b3c4279d3c96232cf2c2461ecc55e/examples/errorfiles/503.http), and when you open it in vi(m) it has a ^M (CR) sign on the header lines (everything before the body, including the empty line separating them). If you can't download it, you could just write that top part in WordPad or something similar (DOS line endings) and then copy it to your Unix machine.
So I wrote my own file using their header, and now everything works fine.
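For anyone checking their own errorfile, a quick sketch on a GNU/Linux box (the path is just an example, and unix2dos comes from the dos2unix package):

# CRLF-terminated lines show up as "^M$"; plain LF lines end with just "$".
cat -A /etc/haproxy/errors/403.http

# Convert LF-only line endings to CRLF in place.
unix2dos /etc/haproxy/errors/403.http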
Cheers.