HAProxy url_beg/path_beg doesn't seem to work

My haproxy.cfg is this:
frontend go 127.0.0.1:8081
    timeout client 86400000
    acl ddos_log path_beg /ddoslogger/
    use_backend ddos_backend if ddos_log
    use_backend normal_backend if !ddos_log

backend ddos_backend
    mode http
    option httplog
    balance uri
    # Will add more servers if this works
    server go11 localhost:8083 check
    server go11 localhost:8083 backup

backend normal_backend
    mode http
    option httplog
    option allbackups
    default-server weight 50 slowstart 30s inter 3s fastinter 2s downinter 5s
    server go10 localhost:8082 check
    server go10 localhost:8082 backup
What I planned to do, basically, was this: for all requests to "/ddoslogger/", use the balance uri method to select a server, and for everything else use a different load-balancing approach. Both backends talk to the same set of servers (I have removed the others for debugging purposes).
Here's what I get when I make a request to HAProxy in debug mode:
00000000:go.accept(0004)=0006 from [127.0.0.1:58054]
00000000:normal_backend.clireq[0006:ffff]: POST /ddoslogger/a01324jlkas HTTP/1.1
00000000:normal_backend.clihdr[0006:ffff]: Host: localhost:8081
00000000:normal_backend.clihdr[0006:ffff]: Connection: keep-alive
00000000:normal_backend.clihdr[0006:ffff]: Content-Length: 204
00000000:normal_backend.clihdr[0006:ffff]: Cache-Control: no-cache
00000000:normal_backend.clihdr[0006:ffff]: Origin: chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm
00000000:normal_backend.clihdr[0006:ffff]: User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.65 Safari/537.36
00000000:normal_backend.clihdr[0006:ffff]: Content-Type: application/x-www-form-urlencoded
00000000:normal_backend.clihdr[0006:ffff]: Accept: */*
00000000:normal_backend.clihdr[0006:ffff]: Accept-Encoding: gzip,deflate,sdch
00000000:normal_backend.clihdr[0006:ffff]: Accept-Language: en-US,en;q=0.8
00000000:normal_backend.clihdr[0006:ffff]: Cookie: <truncated>
00000000:normal_backend.srvrep[0006:0007]: HTTP/1.1 200 OK
00000000:normal_backend.srvhdr[0006:0007]: Content-Type: text/plain; charset=utf-8
00000000:normal_backend.srvhdr[0006:0007]: Content-Length: 0
00000000:normal_backend.srvhdr[0006:0007]: Date: Thu, 19 Sep 2013 03:55:19 GMT
Any suggestions for what I'm doing wrong?

Fixed this by moving
mode http
option httplog
to the frontend section, so it looks like this now:
frontend go 127.0.0.1:8081
    timeout client 86400000
    mode http
    option httplog
    acl ddos_log path_beg /ddoslogger/
    use_backend ddos_backend if ddos_log
    use_backend normal_backend if !ddos_log
    ...
In case others are struggling with this!
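For anyone wondering why this works: a frontend without mode http runs in HAProxy's default TCP mode, where layer-7 fetches like path_beg are never parsed, so ddos_log never matches and every request falls through to normal_backend via the !ddos_log rule. A common way to avoid repeating the mode in every section is a defaults block; a minimal sketch based on the config above (using default_backend for the fallback, which is equivalent to the negated ACL):

defaults
    mode http
    option httplog
    timeout client 86400000

frontend go 127.0.0.1:8081
    acl ddos_log path_beg /ddoslogger/
    use_backend ddos_backend if ddos_log
    default_backend normal_backend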

Related

Rate Limiting using HAProxy with Large Post Requests

I am using HAProxy v2.0.13 in front of an API and have attempted to implement URL-based rate limiting, to limit connections to 5 within a 30-minute sliding window per source IP for the "/get_link" path:
frontend fe_dev
    mode http
    bind *:8081,[::]:8081
    stick-table type ip size 100k expire 30m store http_req_rate(30m)
    http-request track-sc0 src if METH_POST { path -i -m beg /get_link }
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 5 }
    default_backend be_dev
This API endpoint is called from a JavaScript function using XMLHttpRequest, and I am using Google Chrome v83.
var xHR = new XMLHttpRequest();
xHR.open("POST", "get_link", true);
xHR.onload = function() {
    console.log('status code is ' + this.status);
};
xHR.onerror = function() {
    console.log("onerror()");
};
var obj = {};
xHR.setRequestHeader("Content-Type", "application/json");
xHR.send(JSON.stringify(obj));
When my POST request is small (a few hundred bytes), everything works fine: after 5 requests I start getting HTTP 429 back. I then tried a large POST request (content length around 35500 bytes), and this is when Chrome started triggering the onerror function.
I have done a tcpdump and it looks like HAProxy doesn't wait for the whole request before sending back a 429 (output trimmed for brevity):
POST /get_link HTTP/1.1
Host: server:8081
Connection: keep-alive
Content-Length: 35687
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36
Content-Type: application/json
Accept: */*
Origin: http://server:8081
Referer: http://server:8081/index.html
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
{"req1":"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXHTTP/1.1 429 Too Many Requests
content-length: 117
cache-control: no-cache
content-type: text/html
connection: close
<html><body><h1>429 Too Many Requests</h1>
You have sent too many requests in a given amount of time.
</body></html>
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
From looking at tcpdump I can also see that HAProxy sends a TCP RST as soon as it has sent back the 429 even though Chrome is still sending POST data. How do I get HAProxy to play nicely and wait until it has received the whole request before rejecting it?
The answer that no one came up with: enabling "option http-buffer-request".
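For concreteness, a sketch of the frontend above with that option added (putting it in the frontend is one valid spot; a defaults section works too). It makes HAProxy buffer the whole request body before the http-request rules run, so the 429 decision happens only after the POST data has fully arrived:

frontend fe_dev
    mode http
    bind *:8081,[::]:8081
    option http-buffer-request
    stick-table type ip size 100k expire 30m store http_req_rate(30m)
    http-request track-sc0 src if METH_POST { path -i -m beg /get_link }
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 5 }
    default_backend be_dev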

Sometimes getting 405 when using Prometheus from Grafana

I'm using a Prometheus datasource from Grafana, and I sometimes get 200 OK and sometimes 405 Method Not Allowed when viewing graphs or inserting new ones.
It is very strange that it only happens sometimes, for random graphs, and sometimes only for some graphs within a single dashboard.
The datasource is set up to proxy requests through the backend.
Both Grafana and Prometheus are running in Kubernetes as StatefulSets in Google cloud.
I'm accessing Grafana at localhost:3000 through an SSH tunnel to the pod in Kubernetes, and Grafana is accessing Prometheus at http://prometheus:9090/.
I've tried changing the method from GET to POST in the datasource setup, but then I get 405 on every request.
The raw headers in the request for http://localhost:3000/api/datasources/proxy/1/api/v1/query_range?query=kafka_topic_highwater{topic="test"}&start=1541499015&end=1541499930&step=15 are:
Host: localhost:3000
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:63.0) Gecko/20100101 Firefox/63.0
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://localhost:3000/d/tDB6XEaiz/kafka-realtime-timeseries?orgId=1
X-Grafana-Org-Id: 1
DNT: 1
Connection: keep-alive
Cookie: grafana_user=admin; grafana_remember=asdf8a620; grafana_sess=<secret>
And the response is:
HTTP/1.1 405 Method Not Allowed
Cache-Control: no-cache
Content-Length: 19
Content-Type: text/plain; charset=utf-8
Date: Tue, 06 Nov 2018 10:25:22 GMT
Expires: -1
Pragma: no-cache
X-Content-Type-Options: nosniff
Any ideas what might be causing this?
The problem was that I had two Prometheus instances running in the same cluster with the same service name, so requests were distributed across them... One of them replied with 405 because it was set up to forward metrics directly to StackDriver.
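If you suspect the same problem, one quick check is to list the endpoints behind the service; seeing pods from two different StatefulSets there means requests are being spread across them. A sketch (service name "prometheus" is taken from the question; the namespace is an assumption):

kubectl get endpoints prometheus -n monitoring -o wide

A single logical Prometheus should map to exactly the pods you expect; any stray IPs point at a second deployment sharing the service selector.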

How to rewrite paths with Traefik when using path prefix rules?

My Traefik config for WordPress contains the following docker-labels:
- "traefik.backend=wordpress"
- "traefik.docker.network=web"
- "traefik.frontend.rule=Host:MyHostName.net;PathPrefix:/blog"
- "traefik.enable=true"
- "traefik.port=80"
Now requesting the URL "https://MyHostName/blog" seems to reach the service, which then returns a redirect to "https://MyHostName/wp-admin...".
I cannot use subdomains.
How can I solve this?
UPDATE 0
The first thing to do should be adding the filter "PathPrefixStrip:/blog" to remove the "/blog" prefix when forwarding the request to the service. Correct?
But how do I modify (for example) a redirect request to add the prefix "/blog" to the redirect URL?
UPDATE 1
At https://github.com/containous/traefik/issues/985 my question is "discussed", and a solution seems to have been merged (https://github.com/containous/traefik/pull/1442).
In short: stripped prefixes will be added as a header (X-Forwarded-Prefix).
I will check that and write down the results here.
Additional resources:
Routing paths with Traefik
Is there an equivalent to ReverseProxyPass for Apache in Traefik?
UPDATE 2
Now I created a request looking like this:
https://MYHOSTNAME/blog
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: de,en-US;q=0.7,en;q=0.3
Connection: keep-alive
Cookie: ocuvhr6ala6i=d2cd9020839889a752b4375a63dedad0; oc_sessionPassphrase=qJu13Q%2FlAoSsv5b0qC18Re%2BcrcML6o32c2XuDJEGViIMI4uERIf%2Bs77DvFbMSkEBkZs%2Bn%2FfnUjdB9APvk4zq2qlj6AiDXX2CGYf31MPVci8HkgcsXFcpL7cRLBbRGRWS; __Host-nc_sameSiteCookielax=true; __Host-nc_sameSiteCookiestrict=true
Host: MYHOSTNAME
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
The "PathPrefixStrip" seems to work in the direction CLIENT>>SERVICE. This is what my traefik log contains:
traefik | time="2018-04-04T18:12:54Z" level=debug msg="vulcand/oxy/roundrobin/rr: competed ServeHttp on request" Request="
{
  "Method":"GET",
  "URL":{
    "Scheme":"",
    "Opaque":"",
    "User":null,
    "Host":"",
    "Path":"/",
    "RawPath":"",
    "ForceQuery":false,
    "RawQuery":"",
    "Fragment":""
  },
  "Proto":"HTTP/2.0",
  "ProtoMajor":2,
  "ProtoMinor":0,
  "Header":{
    "Accept":[
      "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
    ],
    "Accept-Encoding":[
      "gzip, deflate, br"
    ],
    "Accept-Language":[
      "de,en-US;q=0.7,en;q=0.3"
    ],
    "Cookie":[
      "ocuvhr6ala6i=d2cd9020839889a752b4375a63dedad0; oc_sessionPassphrase=qJu13Q%2FlAoSsv5b0qC18Re%2BcrcML6o32c2XuDJEGViIMI4uERIf%2Bs77DvFbMSkEBkZs%2Bn%2FfnUjdB9APvk4zq2qlj6AiDXX2CGYf31MPVci8HkgcsXFcpL7cRLBbRGRWS; __Host-nc_sameSiteCookielax=true; __Host-nc_sameSiteCookiestrict=true"
    ],
    "Upgrade-Insecure-Requests":[
      "1"
    ],
    "User-Agent":[
      "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"
    ],
    "X-Forwarded-Prefix":[
      "/blog"
    ]
  },
  "ContentLength":0,
  "TransferEncoding":null,
  "Host":"MYHOSTNAME",
  "Form":null,
  "PostForm":null,
  "MultipartForm":null,
  "Trailer":null,
  "RemoteAddr":"81.128.35.176:33468",
  "RequestURI":"/",
  "TLS":null
}
"
But the redirect response looks as follows in my browser:
HTTP/2.0 302 Found
cache-control: no-cache, must-revalidate, max-age=0
content-length: 0
content-type: text/html; charset=UTF-8
date: Wed, 04 Apr 2018 18:44:18 GMT
expires: Wed, 11 Jan 1984 05:00:00 GMT
location: https://MYHOSTNAME/wp-admin/install.php
server: Apache/2.4.25 (Debian)
X-Firefox-Spdy: h2
x-powered-by: PHP/7.2.2
So the redirect response does not contain any information about the stripped path prefix "/blog".
UPDATE 3
In the end, it looks like a problem with the software served inside the container, which does not handle the X-Forwarded-Prefix header.
Additional resources:
Wordpress & Nginx with Docker: Static files not loaded
Any ideas?
Since v2.0, Traefik doesn't support PathPrefixStrip anymore; you need to use a middleware as described in this article: https://doc.traefik.io/traefik/migration/v1-to-v2/#strip-and-rewrite-path-prefixes 😊
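For reference, a sketch of the same setup as v2 docker labels, adapted from the labels in the question (the router and middleware names here are made up for the example):

- "traefik.enable=true"
- "traefik.http.routers.wordpress.rule=Host(`MyHostName.net`) && PathPrefix(`/blog`)"
- "traefik.http.middlewares.blog-strip.stripprefix.prefixes=/blog"
- "traefik.http.routers.wordpress.middlewares=blog-strip"
- "traefik.http.services.wordpress.loadbalancer.server.port=80"

As in v1, the stripped prefix is forwarded as X-Forwarded-Prefix, so the application still has to honor it when building redirects.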
Maybe you should add all possible values to your PathPrefixStrip:/blog rule, e.g.
PathPrefixStrip: /blog,/wp-admin,/abc,/xyz
In many cases this works for standard routes. The biggest problem is when your backend service does not listen on the root / but in some sub-dir /something/index.html, and that sub-dir pulls resources from the root /.

HAProxy 1.4: how to replace X-Forwarded-For with custom IP

I have an HAProxy 1.4 server behind an AWS ELB. Logically, the ELB sends the user's IP in the X-Forwarded-For header. My app reads that header and behaves differently based on the IP (country).
I want to test that behavior by overriding X-Forwarded-For with custom IPs, but the AWS ELB appends my current IP to my custom value (X-Forwarded-For: 1.2.3.4, 200.1.130.2).
What I have been trying to do is send another custom header, X-Force-IP, and once it gets into HAProxy, delete the X-Forwarded-For headers and use reqirep to rename X-Force-IP to X-Forwarded-For.
This is what my config chunk looks like:
acl custom-ip hdr_cnt(X-Force-IP) 1
reqidel ^X-Forwarded-For:.* if custom-ip
reqrep X-Force-IP X-Forwarded-For if custom-ip
but when it gets into my app, the app server (lighttpd) rejects it with "HTTP 400 Bad Request" as if it were malformed.
[ec2-user#haproxy-stage]$ curl -I -H "X-Forwarded-For: 123.456.7.12" "http://www.example.com"
HTTP/1.1 200 OK
Set-Cookie: PHPSESSID=mcs0tqlsg31haiavqopdvm02i6; path=/; domain=www.example.com
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Date: Sun, 11 Jan 2015 02:57:34 GMT
Server: beta
[ec2-user#haproxy-stage]$ curl -I -H "X-Forwarded-For: 123.456.7.12" -H "X-Force-IP: 321.456.7.12" "http://www.example.com"
HTTP/1.1 400 Bad Request
Content-Type: text/html
Content-Length: 349
Date: Sun, 11 Jan 2015 02:57:44 GMT
Server: beta
From the previous it looks like the ACL is working.
I checked with tcpdump on the app server, and it seems HAProxy deleted the X-Forwarded-For header but also deleted X-Force-IP instead of renaming it.
[ec2-user#beta ~]# sudo tcpdump -A -s 20240 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' | egrep --line-buffered "^........(GET |HTTP\/|POST |HEAD )|^[A-Za-z0-9-]+: " | sed -r 's/^........(GET |HTTP\/|POST |HEAD )/\n\1/g'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 20240 bytes
GET / HTTP/1.1
User-Agent: curl/7.38.0
Host: www.example.com
Accept: */*
Connection: close
HTTP/1.1 400 Bad Request
Content-Type: text/html
Content-Length: 349
Connection: close
Date: Sun, 11 Jan 2015 02:56:50 GMT
Server: beta
The previous was with the X-Force-IP header; the following is without it:
GET / HTTP/1.1
User-Agent: curl/7.38.0
Host: www.example.com
Accept: */*
X-Forwarded-For: 123.456.7.12
Connection: close
HTTP/1.1 200 OK
X-Powered-By: PHP/5.3.4
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Connection: close
Transfer-Encoding: chunked
Date: Sun, 11 Jan 2015 02:57:02 GMT
Server: beta
^C71 packets captured
71 packets received by filter
0 packets dropped by kernel
Any help?
I was expecting to have "X-Force-IP: 321.456.7.12" converted into "X-Forwarded-For: 321.456.7.12"
Thanks!
Ignacio
The regex matching provided here doesn't do simple substitution. It's quite a bit more powerful, and has to be used accordingly.
reqrep ^X-Force-IP:(.*) X-Forwarded-For:\1 if custom-ip
The reqrep (case-sensitive request regex replace) and reqirep (case-insensitive request regex replace) directives operate at the individual request header level, replacing the header name and its value with the 2nd argument if the 1st argument matches... so if there's information you want to preserve (such as the value), you need one or more capture groups, such as (.*), in the 1st arg, and a placeholder \1 in the 2nd arg.
Your current configuration does indeed invalidate the request, by creating a malformed/incomplete header line.
Also, you should anchor the pattern to the left side of the header name with ^. Otherwise, the expression could match more headers than you expect.
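Putting the answer together with the original ACL, the corrected chunk would be (same header names as in the question):

acl custom-ip hdr_cnt(X-Force-IP) 1
reqidel ^X-Forwarded-For:.* if custom-ip
reqrep ^X-Force-IP:(.*) X-Forwarded-For:\1 if custom-ip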

Fiddler not capturing WCF traffic from the web server to the application server

I have two possible flows:
ConsoleClient -(1)-> ApplicationServer
or
SilverlightClient -(2)-> WebServer -(3)-> ApplicationServer
Fiddler successfully captures the HTTP traffic on (1) and (2), but not on (3). Here is a sample capture of (1):
POST /WcfDemo/ws HTTP/1.1
Content-Type: application/soap+xml; charset=utf-8
Host: il-mark-lt
Content-Length: 521
Expect: 100-continue
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing"><s:Header><a:Action s:mustUnderstand="1">http://tempuri.org/IWcfDemoService/Add</a:Action><a:MessageID>urn:uuid:d7fde351-12fd-4872-bc26-52ff97f126e9</a:MessageID><a:ReplyTo><a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address></a:ReplyTo><a:To s:mustUnderstand="1">http://il-mark-lt/WcfDemo/ws</a:To></s:Header><s:Body><Add xmlns="http://tempuri.org/"><x>4</x><y>5</y></Add></s:Body></s:Envelope>
HTTP/1.1 200 OK
Content-Length: 399
Content-Type: application/soap+xml; charset=utf-8
Server: Microsoft-HTTPAPI/2.0
Date: Sat, 17 Sep 2011 20:57:16 GMT
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing"><s:Header><a:Action s:mustUnderstand="1">http://tempuri.org/IWcfDemoService/AddResponse</a:Action><a:RelatesTo>urn:uuid:d7fde351-12fd-4872-bc26-52ff97f126e9</a:RelatesTo></s:Header><s:Body><AddResponse xmlns="http://tempuri.org/"><AddResult>9</AddResult></AddResponse></s:Body></s:Envelope>
And here is an example of (2):
POST /WcfDemoService.svc/ws HTTP/1.1
Host: localhost:56970
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json
Accept-Language: fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.6,he-IL;q=0.5,he;q=0.4,ru-RU;q=0.3,ru;q=0.1
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Connection: keep-alive
Referer: http://localhost:56970/ClientBin/SilverlightClient.xap
Content-Length: 581
Content-Type: application/soap+xml; charset=utf-8
<s:Envelope xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:s="http://www.w3.org/2003/05/soap-envelope"><s:Header><a:Action s:mustUnderstand="1">http://tempuri.org/IWcfDemoService2/Add</a:Action><a:MessageID>urn:uuid:e8420d3e-f568-49ce-bfc7-5631d5bf3fd0</a:MessageID><a:ReplyTo><a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address></a:ReplyTo><a:To s:mustUnderstand="1">http://localhost:56970/WcfDemoService.svc/ws</a:To></s:Header><s:Body><Add xmlns="http://tempuri.org/"><x>11</x><y>22</y><serverChannelKind>ws</serverChannelKind></Add></s:Body></s:Envelope>
HTTP/1.1 200 OK
Server: ASP.NET Development Server/10.0.0.0
Date: Sat, 17 Sep 2011 20:59:23 GMT
X-AspNet-Version: 4.0.30319
Content-Length: 401
Cache-Control: private
Content-Type: application/soap+xml; charset=utf-8
Connection: Close
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing"><s:Header><a:Action s:mustUnderstand="1">http://tempuri.org/IWcfDemoService2/AddResponse</a:Action><a:RelatesTo>urn:uuid:e8420d3e-f568-49ce-bfc7-5631d5bf3fd0</a:RelatesTo></s:Header><s:Body><AddResponse xmlns="http://tempuri.org/"><AddResult>33</AddResult></AddResponse></s:Body></s:Envelope>
Now, I am absolutely sure that (3) does get through. So it all boils down to some misconfiguration on the WebServer, but I cannot nail it down. The web server is just a trivial ASP.NET application hosted within IIS. It even has the following lines in its web.config:
<system.net>
  <defaultProxy>
    <proxy bypassonlocal="false" usesystemdefault="true" />
  </defaultProxy>
</system.net>
Still, this does not work.
To further strengthen my suspicion on the web server configuration, I have checked the SilverlightClient --> ApplicationServer flow and it is captured just fine.
I am using the Asp.Net development server.
Edit
Running procmon reveals that the following suspicious registry key is consulted (amongst others):
HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\ProxyBypass
And on my machine it was set to 1. I changed it to 0, and it seems that solved my issue. The only problem is that when I change it back to 1, Fiddler continues to capture the problematic leg! Very interesting.
Anyway, I am satisfied, for now.
You are calling "localhost", right?
Fiddler is not able to capture local traffic if you are using "localhost" as the hostname.
Solutions:
Use the server name (e.g. myserver)
Use ipv4.fiddler (e.g. http://ipv4.fiddler:8787)
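Applied to the WebServer -> ApplicationServer leg, that means pointing the WCF client endpoint in the web server's config at something other than localhost. An illustrative sketch (the address, binding, and contract values are adapted from the captures above, not copied from the real config):

<client>
  <!-- ipv4.fiddler resolves like localhost but routes the call through Fiddler -->
  <endpoint address="http://ipv4.fiddler/WcfDemo/ws"
            binding="wsHttpBinding"
            contract="IWcfDemoService" />
</client>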
Not sure if these are causing it ... but,
A few things to check:
In IIS7 the appPool has a loadUserProfile setting. It causes the session to load a user profile, which means it can pick up system proxy settings.
Check the code making the request from the web server - even if you configure it to use the system proxy and bypass on local (which only applies to names without dots in them), the code making the request can still explicitly choose to use or not use a proxy.
Far-fetched, but you may want to play with the account the appPool runs as - local account with profile vs. Network Service.
Hope that helps - these network things have a lot of variables between two points :)