How does maxRequestsPerConnection of istio work? - kubernetes

Hi everyone.
I have been learning Istio, and to understand how maxRequestsPerConnection works, I applied the manifest below.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1
httpbin is a sample service that ships with Istio.
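(For reference, httpbin can be deployed from the samples directory of the Istio release, with sidecar injection enabled for the namespace; assuming the standard release layout, roughly:
$ kubectl label namespace default istio-injection=enabled
$ kubectl apply -f samples/httpbin/httpbin.yaml
)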
I thought maxRequestsPerConnection meant how many HTTP requests are allowed per TCP connection, and that in this case Istio would close the TCP connection after the pod received one HTTP request.
After applying the manifest, I sent some HTTP requests using telnet. I expected Istio to accept one request and then close the TCP connection, but it didn't:
$ telnet httpbin 8000
Trying 10.76.12.133...
Connected to httpbin.default.svc.cluster.local.
Escape character is '^]'.
GET /get HTTP/1.1
User-Agent: Telnet [ja] (Linux)
Host: httpbin
HTTP/1.1 200 OK
server: envoy
date: Sun, 07 Nov 2021 14:14:16 GMT
content-type: application/json
content-length: 579
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 9
{
  "args": {},
  "headers": {
    "Host": "httpbin",
    "User-Agent": "Telnet [ja] (Linux)",
    "X-B3-Parentspanid": "b042ad708e2a47a2",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "b6a08d45e1a1e15e",
    "X-B3-Traceid": "fc23863eafb0322db042ad708e2a47a2",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=d9bb27f31fe44200f803dbe736419b4664b5b81045bb3811711119ca5ccf6a37;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.6",
  "url": "http://httpbin/get"
}
GET /get HTTP/1.1
User-Agent: Telnet [ja] (Linux)
Host: httpbin
HTTP/1.1 200 OK
server: envoy
date: Sun, 07 Nov 2021 14:14:18 GMT
content-type: application/json
content-length: 579
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 3
{
  "args": {},
  "headers": {
    "Host": "httpbin",
    "User-Agent": "Telnet [ja] (Linux)",
    "X-B3-Parentspanid": "85722c0d777e8537",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "31d2acc5348a6fc5",
    "X-B3-Traceid": "d7ada94a092d681885722c0d777e8537",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=d9bb27f31fe44200f803dbe736419b4664b5b81045bb3811711119ca5ccf6a37;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.6",
  "url": "http://httpbin/get"
}
After this, I sent ten HTTP requests using fortio and got the same result.
$ kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 1 -qps 0 -n 10 -loglevel Warning http://httpbin:8000/get
14:22:56 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.17.1 running at 0 queries per second, 2->2 procs, for 10 calls: http://httpbin:8000/get
Starting at max qps with 1 thread(s) [gomax 2] for exactly 10 calls (10 per thread + 0)
Ended after 106.50891ms : 10 calls. qps=93.889
Aggregated Function Time : count 10 avg 0.010648204 +/- 0.01639 min 0.003757335 max 0.059256801 sum 0.106482036
# range, mid point, percentile, count
>= 0.00375734 <= 0.004 , 0.00387867 , 30.00, 3
> 0.004 <= 0.005 , 0.0045 , 70.00, 4
> 0.005 <= 0.006 , 0.0055 , 80.00, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.05 <= 0.0592568 , 0.0546284 , 100.00, 1
# target 50% 0.0045
# target 75% 0.0055
# target 90% 0.014
# target 99% 0.0583311
# target 99.9% 0.0591642
Sockets used: 1 (for perfect keepalive, would be 1)
Jitter: false
Code 200 : 10 (100.0 %)
Response Header Sizes : count 10 avg 230.1 +/- 0.3 min 230 max 231 sum 2301
Response Body/Total Sizes : count 10 avg 824.1 +/- 0.3 min 824 max 825 sum 8241
All done 10 calls (plus 0 warmup) 10.648 ms avg, 93.9 qps
$
In my understanding, the message "Sockets used: 1 (for perfect keepalive, would be 1)" means that fortio used only one TCP connection.
At first I guessed that clients open a separate TCP connection for each HTTP request, but if that were true, the telnet connection would have been closed by the foreign host, and fortio would have used ten TCP connections.
Could someone explain what maxRequestsPerConnection actually does?
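One way to dig into this, as a hypothesis to test rather than a definitive answer: maxRequestsPerConnection may govern the sidecar's upstream connection pool (Envoy to the application) rather than the downstream client connection, which would explain why the telnet session stays open. A sketch for watching Envoy's upstream connection counter, assuming the standard Istio sidecar and its usual cluster naming (adjust the cluster name for your mesh):
$ kubectl exec "$FORTIO_POD" -c istio-proxy -- pilot-agent request GET stats | grep 'outbound|8000||httpbin.default.svc.cluster.local.upstream_cx_total'
If this counter grows by roughly one per request while fortio still reports Sockets used: 1, the limit is being enforced on the upstream side of the proxy.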

Related

Unable to create user on Keycloak using below curl request. Getting error "{"error":"unknown_error"}"

I am trying to create a user on Keycloak using the request below:
echo "* Request for authorization"
RESULT=`curl --data "username=admin&password=admin&grant_type=password&client_id=admin-cli" http://localhost:8080/auth/realms/master/protocol/openid-connect/token`
echo "Recovery of the token"
TOKEN=`echo $RESULT | sed 's/.*access_token":"//g' | sed 's/".*//g'`
echo "user creation"
curl -v http://localhost:8080/auth/admin/realms/teste/users -H "Content-Type: application/json" -H "Authorization: bearer $TOKEN" --data '{"username": "user12", "firstName":"User","lastName":"Test", "email":"user12#randomemail.com", "enabled":"true", "emailVerified": false, "totp": true, "credentials": [ { "type": "password", "value": "admin.1","temporary": false } ], "groups": [ "test","monitor"]} '
When I run the request above, I get the error {"error":"unknown_error"}.
Complete response of the request:
* Request for authorization
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1906 100 1837 100 69 23857 896 --:--:-- --:--:-- --:--:-- 24753
Recovery of the token
user creation
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> POST /auth/admin/realms/teste/users HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.1
> Accept: */*
> Content-Type: application/json
> Authorization: bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnVkE4bkRJbHU5VS1DY1RyTm1pczNQaDF5TDhGbEs0ZGE1U2tBWndtY25jIn0.eyJleHAiOjE2NjE1MDMxMjcsImlhdCI6MTY2MTUwMzA2NywianRpIjoiMWU2YmUwYzktMDYyOC00NGFhLWE1ZjMtN2UxZjk1NjdkZjczIiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL2F1dGgvcmVhbG1zL21hc3RlciIsInN1YiI6IjY3Mzk3NzI2LTM0ZDAtNDQ2NC04Mjk1LWJjZGM3Yjg3YTU2YyIsInR5cCI6IkJlYXJlciIsImF6cCI6ImFkbWluLWNsaSIsInNlc3Npb25fc3RhdGUiOiI1MDRiYjRkNS0yNTkwLTQ2NmUtYTJkYy01OTQzNGRlZmM3ODciLCJhY3IiOiIxIiwic2NvcGUiOiJlbWFpbCBwcm9maWxlIiwic2lkIjoiNTA0YmI0ZDUtMjU5MC00NjZlLWEyZGMtNTk0MzRkZWZjNzg3IiwiZW1haWxfdmVyaWZpZWQiOmZhbHNlLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJhZG1pbiJ9.ggeUu-qpOErUkS_dpESqKcX4_IaojeETpHgOfR2pwx6TdxbY-Xbnq5IP0Xw4QU88neIzZd1GDSkKrO5KGKWptypsu5lIkwD_RqwFa_DSxhsunJvd4ZndEjN7ZgE5W41XCfYfgZNe9tEUnmlhosmBznM_NgsHfMJmzdJcO6m3-kgDn0xRnRL8r-jWuOzO1hp7TrR_3RePaYw_su_GxtFZtV1gtjoDw8xz8RPY6zia6jgn86a1A5npRyeSf8gAqOcKbqIbc6DdwqX-h-0NMin0S3ipeQDHR_C_I0NKuGF1I5zBsmUK7mFPpQT3vXmTvti7TUr4KdFmX67W4_ig4T8Ung
> Content-Length: 278
>
* upload completely sent off: 278 out of 278 bytes
< HTTP/1.1 500 Internal Server Error
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: SAMEORIGIN
< Referrer-Policy: no-referrer
< Date: Fri, 26 Aug 2022 08:37:47 GMT
< Connection: keep-alive
< Strict-Transport-Security: max-age=31536000; includeSubDomains
< X-Content-Type-Options: nosniff
< Content-Type: application/json
< Content-Length: 25
<
* Connection #0 to host localhost left intact
{"error":"unknown_error"}* Closing connection 0

NOSRV errors seen in haproxy logs

We have HAProxy in front of 2 Apache servers, and every day, for less than a minute, I get NOSRV errors in the HAProxy logs. There are successful requests from the same source IP, so this is only intermittent. There is no corresponding error entry in the backend logs.
Below is the snippet from access logs:
Dec 22 20:21:25 proxy01 haproxy[3000561]: X.X.X.X:60872 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:43212 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:43206 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:60974 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32772 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 103 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32774 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 59 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32776 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 57 0
Below is the HAProxy config file:
defaults
    log global
    timeout connect 15000
    timeout check 5000
    timeout client 30000
    timeout server 30000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend Local_Server
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/
    mode http
    option httplog
    cookie SRVNAME insert indirect nocache maxidle 8h maxlife 8h
    #capture request header X-Forwarded-For len 15
    #capture request header Host len 32
    http-request capture req.hdrs len 512
    log-format "%ci:%cp[%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
    #log-format "%ci:%cp %ft %b/%s %Tw/%Tc/%Tr/ %ST %B %rc %bq %hr %hs %{+Q}r %Tt %Ta"
    option dontlognull
    option http-keep-alive
    #declare whitelists for urls
    acl xx_whitelist src -f /etc/haproxy/xx_whitelist.lst
    acl is-blocked-ip src -f /etc/haproxy/badactors-list.txt
    http-request silent-drop if is-blocked-ip
    acl all src 0.0.0.0
    ######### ANTI BAD GUYS STUFF ###########################################
    #anti DDOS sticktable - sends a 500 after 5s when requests from IP over 120 per
    #frontend for stick table see backend "st_src_global" also
    #Restrict number of requests in last 10 secs
    # TO MONTOR RUN " watch -n 1 'echo "show table st_src_global" | socat unix:/run/haproxy/admin.sock -' " ON CLI.
    #ZZZ THIS MAY NEED DISABLEING FOR LOAD TESTS ZZZZ
    # Table definition
    http-request track-sc0 src table st_src_global #<- defines tracking stick table
    stick-table type ip size 100k expire 10s store http_req_rate(50000s) #<- sets the limit for and time to store IP
    http-request silent-drop if { sc_http_req_rate(0) gt 50000 } # drops if requests are greater the 5000 in 5 secs
    # Allow clean known IPs to bypass the filter
    tcp-request connection accept if { src -f /etc/haproxy/xx_whitelist.lst }
    #Slowlorris protection -send 408 if http request not completed in 5secs
    timeout http-request 10s
    option http-buffer-request
    # Block Specific Requests
    #http-request deny if HTTP_1.0
    http-request deny if { req.hdr(user-agent) -i -m sub phantomjs slimerjs }
    #traffic shape
    #xxxx.xxxx.xx.xx
    acl xxxxx.xxxxx.xx.xx hdr(host) -i xxxx.xxxx.xx.xx
    use_backend xxxx.xxxx.xx.xx if xxxx.xxxx.xx.xx xx_whitelist #update from proxys

#sticktable for dos protection
backend st_src_global
    stick-table type ip size 1m expire 10s store http_req_rate(50000s)

backend xxxxxxx.xxxxx.xx.xx
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server web01-http x.x.x.x:80 check maxconn 100
    server web03-http x.x.x.x.:80 check maxconn 100
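<NOSRV> in an HAProxy log generally means no backend server could be chosen for the request. One thing worth checking during such a window is the admin socket the config above already references; a sketch, assuming the socket is at /run/haproxy/admin.sock as in the comments (fields 1, 2, 5, 7, and 18 of the show stat CSV are proxy, server, current sessions, session limit, and status):
$ echo "show stat" | socat unix-connect:/run/haproxy/admin.sock stdio | cut -d, -f1,2,5,7,18 | column -s, -t
A server reported DOWN, or one pinned at its session limit (maxconn 100 here), would line up with intermittent <NOSRV> bursts.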

Facebook share error: Object at URL of type 'website' is invalid because a required property 'og:title' of type 'string' was not provided

✋🏽
When I paste the URL of my blog into the Facebook debugger, it's not picking up the title or the image. In the view-source of my page, og:title and og:image are rendered, but the Facebook scraper is not reading either of them.
Object at URL 'http://blog.la-pigiste.com/2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/' of type 'website' is invalid because a required property 'og:title' of type 'string' was not provided.
Facebook debugger also says:
{
  "error": {
    "message": "An access token is required to request this resource.",
    "type": "OAuthException",
    "code": 104,
    "fbtrace_id": "BMdGG7oTu6k"
  }
}
but I don't know what it means ... 🤔
Any help is greatly appreciated 🙏🏻
When trying to fetch new scrape information for your URL through the Open Graph Debugger, you get the error:
Curl Error : OPERATION_TIMEOUTED Operation timed out after 10000 milliseconds with 0 bytes received
In other words, your web server didn't reply in 10 seconds and the crawler timed out.
It looks like you configured your web server to behave differently when the request is coming from the Facebook Crawler.
You can verify this using curl.
Fetching your URL with curl's default User Agent works fine:
$ curl -v 'http://blog.la-pigiste.com/2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/' > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 149.56.140.68...
* TCP_NODELAY set
* Connected to blog.la-pigiste.com (149.56.140.68) port 80 (#0)
> GET /2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/ HTTP/1.1
> Host: blog.la-pigiste.com
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Wed, 20 Sep 2017 10:34:37 GMT
< Content-Type: text/html; charset=UTF-8
< Content-Length: 117446
< Connection: keep-alive
< Vary: Accept-Encoding
< Last-Modified: Wed, 20 Sep 2017 07:25:20 GMT
< Accept-Ranges: bytes
< Vary: Accept-Encoding
< X-Powered-By: PleskLin
< Cache-Control: max-age=0, no-cache, no-store, must-revalidate
< Pragma: no-cache
< Expires: Mon, 29 Oct 1923 20:30:00 GMT
<
{ [956 bytes data]
100 114k 100 114k 0 0 159k 0 --:--:-- --:--:-- --:--:-- 159k
* Connection #0 to host blog.la-pigiste.com left intact
However, when the Facebook crawler User-Agent is used (facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)), the web server replies differently, and only after about 14 seconds:
$ curl -v -A "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)" 'http://blog.la-pigiste.com/2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/' > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 149.56.140.68...
* TCP_NODELAY set
* Connected to blog.la-pigiste.com (149.56.140.68) port 80 (#0)
> GET /2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/ HTTP/1.1
> Host: blog.la-pigiste.com
> User-Agent: facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)
> Accept: */*
>
0 0 0 0 0 0 0 0 --:--:-- 0:00:14 --:--:-- 0< HTTP/1.1 200 OK
< Server: nginx
< Date: Wed, 20 Sep 2017 10:37:15 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
< X-Powered-By: PHP/5.6.30
< X-Pingback: http://blog.la-pigiste.com/xmlrpc.php
< Link: <http://blog.la-pigiste.com/wp-json/>; rel="https://api.w.org/", <...>; rel=shortlink
< Set-Cookie: wfvt_983661238=59c244cfe4c12; expires=Wed, 20-Sep-2017 11:07:03 GMT; Max-Age=1800; path=/; httponly
< Vary: Accept-Encoding
< X-Powered-By: PleskLin
<
{ [838 bytes data]
100 124k 0 124k 0 0 8507 0 --:--:-- 0:00:15 --:--:-- 36126
* Connection #0 to host blog.la-pigiste.com left intact
Ensure that your web server replies in time and with the correct HTML, and the crawler will be able to fetch your OG tags.
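As a quick self-check, you can fetch the page with the crawler's User-Agent and confirm the OG tags actually come back; a sketch, assuming the tags are emitted as <meta property="og:..."> elements, as WordPress usually does:
$ curl -s -A "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)" 'http://blog.la-pigiste.com/2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/' | grep -io '<meta property="og:[^>]*>'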

uwsgi long timeouts

I am using Ubuntu 12, nginx, uWSGI 1.9 with a socket, and Django 1.5.
Config:
[uwsgi]
base_path = /home/someuser/web/
module = server.manage_uwsgi
uid = www-data
gid = www-data
virtualenv = /home/someuser
master = true
vacuum = true
harakiri = 20
harakiri-verbose = true
log-x-forwarded-for = true
profiler = true
no-orphans = true
max-requests = 10000
cpu-affinity = 1
workers = 4
reload-on-as = 512
listen = 3000
Client tests from Windows 7:
C:\Users\user>C:\AppServ\Apache2.2\bin\ab.exe -c 255 -n 5000 http://www.someweb.com/about/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking www.someweb.com (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Finished 5000 requests
Server Software: nginx
Server Hostname: www.someweb.com
Server Port: 80
Document Path: /about/
Document Length: 1881 bytes
Concurrency Level: 255
Time taken for tests: 66.669814 seconds
Complete requests: 5000
Failed requests: 1
(Connect: 1, Length: 0, Exceptions: 0)
Write errors: 0
Total transferred: 10285000 bytes
HTML transferred: 9405000 bytes
Requests per second: 75.00 [#/sec] (mean)
Time per request: 3400.161 [ms] (mean)
Time per request: 13.334 [ms] (mean, across all concurrent requests)
Transfer rate: 150.64 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd]  median    max
Connect:        0    8   207.8       1   9007
Processing:    10 3380 11480.5     440  54421
Waiting:        6 1060  3396.5     271  48424
Total:         11 3389 11498.5     441  54423
Percentage of the requests served within a certain time (ms)
50% 441
66% 466
75% 499
80% 519
90% 3415
95% 36440
98% 54407
99% 54413
100% 54423 (longest request)
I have also set the following options:
echo 3000 > /proc/sys/net/core/netdev_max_backlog
echo 3000 > /proc/sys/net/core/somaxconn
So,
1) The first 3000 requests are super fast. I see progress in ab and in the uWSGI request logs:
[pid: 5056|app: 0|req: 518/4997] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5052|app: 0|req: 512/4998] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5054|app: 0|req: 353/4999] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
I don't have any broken pipes or worker respawns.
2) Subsequent requests run very slowly or time out. It looks like some buffer fills up, and I am waiting for it to empty.
3) Some buffer becomes empty.
4) ~500 requests are processed super fast.
5) Some timeout.
6) see Nr. 4
7) see Nr. 5
8) see Nr. 4
9) see Nr. 5
....
I need your help.
Check with netstat and dmesg. You have probably exhausted ephemeral ports or filled the conntrack table.
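A sketch of those checks with standard Linux tools (the conntrack paths assume the nf_conntrack module is loaded):
# Count sockets per TCP state; thousands in TIME_WAIT point at ephemeral-port exhaustion
netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn
# Look for "table full, dropping packet" messages from conntrack
dmesg | grep -i conntrack
# Current vs. maximum conntrack entries
cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max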

Is there a command-line tool that could tell me if Gzip is really on beyond the Gzip 1 header param?

Is there a command-line tool that could tell me if gzip is on? What I'm looking for is something that can tell me whether the stream coming from the server is really gzipped, even if the headers claim it is (which they could do falsely).
I don't see a switch in curl, or wget, or tcpdump, or anything, but maybe I'm just missing something, or perhaps there is something else that could provide me this bit of information? Any help would be appreciated.
This shows Content-Encoding: gzip, indicating compressed data. Since --compressed asks curl to decode the response, the data must then really have been in gzip format; otherwise curl would have reported an error:
$ curl --compressed -v http://zlib.net > /dev/null
* About to connect() to zlib.net port 80 (#0)
* Trying 69.73.181.135... connected
* Connected to zlib.net (69.73.181.135) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: zlib.net
> Accept: */*
> Accept-Encoding: deflate, gzip
>
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0< HTTP/1.1 200 OK
< Date: Tue, 20 Mar 2012 23:19:00 GMT
< Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.7a mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
< Last-Modified: Mon, 06 Feb 2012 03:46:25 GMT
< ETag: "29603b0-84b4-4b84381b0a640"
< Accept-Ranges: bytes
< Vary: Accept-Encoding,User-Agent
< Content-Encoding: gzip
< Content-Length: 9508
< Content-Type: text/html
<
{ [data not shown]
100 9508 100 9508 0 0 24955 0 --:--:-- --:--:-- --:--:-- 50574* Connection #0 to host zlib.net left intact
* Closing connection #0
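If you want proof beyond the headers, you can inspect the raw bytes yourself: ask for gzip but don't pass --compressed, so curl leaves the body undecoded, then check that it starts with the gzip magic number 1f 8b. A sketch:
$ curl -s -H 'Accept-Encoding: gzip' http://zlib.net | head -c 2 | od -An -tx1
This should print 1f 8b if the stream is real gzip, regardless of what the headers claim.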