How to request only a web page header using netcat or sed?

I am not sure whether netcat has some option for requesting only the HTTP header, or whether I should use sed to filter the result. If sed is the way to go, do I simply extract everything before the first "<" occurs?

The command-line tool curl has the -I option, which fetches only the headers:
-I/--head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on a FTP or FILE file, curl displays the file size and last modification time only.
Demo:
$ curl -I stackoverflow.co.uk
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 503
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Thu, 26 Sep 2013 21:06:15 GMT
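As for netcat: it has no HTTP support of its own, but you can hand-write the same HEAD request that curl -I sends. A minimal sketch (assuming a plain HTTP server on port 80; example.com is a placeholder host):
$ printf 'HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80
Since a response to HEAD carries no body by definition, the server returns only the status line and headers, so no sed filtering is needed.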

Related

Uploading a file with the Google Cloud API with a PUT at the root of the server?

I have a server using the Google Drive API. I tried a curl PUT request to upload a simple file (test.txt) to http://myserver/test.txt. As you can see, I made the PUT request at the root of my server. The response I get is the following:
HTTP/1.1 200 OK
X-GUploader-UploadID: AEnB2UqANa4Bj6ilL7z5HZH0wlQi_ufxDiHPtb2zq1Gzcx7IxAEcOt-AOlWsbX1q_lsZUwWt_hyKOA3weAeVpQvPQTwbQhLhIA
ETag: "6e809cbda0732ac4845916a59016f954"
x-goog-generation: 1548877817413782
x-goog-metageneration: 1
x-goog-hash: crc32c=jwfJwA==
x-goog-hash: md5=boCcvaBzKsSEWRalkBb5VA==
x-goog-stored-content-length: 6
x-goog-stored-content-encoding: identity
Content-Type: text/html; charset=UTF-8
Accept-Ranges: bytes
Via: 1.1 varnish
Content-Length: 0
Accept-Ranges: bytes
Date: Wed, 30 Jan 2019 19:50:17 GMT
Via: 1.1 varnish
Connection: close
X-Served-By: cache-bwi5139-BWI, cache-cdg20732-CDG
X-Cache: MISS, MISS
X-Cache-Hits: 0, 0
X-Timer: S1548877817.232336,VS0,VE241
Vary: Origin
Access-Control-Allow-Methods: POST,PUT,PATCH,GET,DELETE,OPTIONS
Access-Control-Allow-Headers: Cache-Control,X-Requested-With,Authorization,Content-Type,Location,Range
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 300
I know you're not supposed to use the API that way; I did it for testing purposes. I understand every header returned, but I can't figure out whether my file has been uploaded because I don't know this API well enough.
My question is very simple:
Just by looking at the response, can you tell me whether my file has been uploaded?
If yes, can I retrieve it, and how?
The HTTP status code traditionally indicates, for any given request, whether it was successful. The status code in the response is always on the first line:
HTTP/1.1 200 OK
Status codes in the 2xx range mean success. You should take some time to familiarize yourself with HTTP status codes if you intend to work with HTTP APIs.
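As for the follow-up: the 200, together with the x-goog-stored-content-length: 6 header in your response (six bytes stored, consistent with a small test.txt), suggests the upload did succeed. You should then be able to fetch the object back from the same URL you PUT it to; a quick check, reusing the host from the question:
$ curl -i http://myserver/test.txt
A 200 response with your file's contents as the body confirms the object is retrievable.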

Structured Data Testing Tool reports "URL was not found", but the URL does exist

When using the Structured Data Testing Tool to test my Mom's recipe site page titled Perfect Chicken Fajitas, I get the following...
ERROR
The URL was not found. Make sure the domain name is correct and the server is responding with a 200 status code.
However, if I curl the same URL, I can see that a 200 results...
$ curl -I http://www.lindysez.com/recipe/perfect-chicken-fajitas/
HTTP/1.1 200 OK
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Server: Microsoft-IIS/7.5
Set-Cookie: bb2_screener_=1457484500+172.4.33.122; path=/
X-UA-Compatible: IE=edge
Link: <http://www.lindysez.com/wp-json/>; rel="https://api.w.org/"
X-Powered-By: ASP.NET
Date: Wed, 09 Mar 2016 00:48:21 GMT
What could be the problem?
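One difference worth ruling out (a guess, not a confirmed diagnosis): curl -I sends a HEAD request, while the testing tool fetches the page with GET, and the Content-Length: 0 plus the bb2_screener_ cookie in your output hint that the server may screen unfamiliar clients. Comparing the status code across methods and user agents would show whether the tool sees something different:
$ # HEAD request, as in the question
$ curl -s -o /dev/null -w '%{http_code}\n' -I http://www.lindysez.com/recipe/perfect-chicken-fajitas/
$ # Full GET with a crawler-like User-Agent
$ curl -s -o /dev/null -w '%{http_code}\n' -A 'Googlebot' http://www.lindysez.com/recipe/perfect-chicken-fajitas/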

HAProxy 1.4: how to replace X-Forwarded-For with custom IP

I have an HAProxy 1.4 server behind an AWS ELB. Logically, the ELB sends the user's IP in the X-Forwarded-For header. My app reads that header and behaves differently based on the IP (country).
I want to test that behavior by overriding X-Forwarded-For with custom IPs, but the AWS ELB appends my current IP to my custom value (X-Forwarded-For: 1.2.3.4, 200.1.130.2).
What I have been trying to do is send another custom header, X-Force-IP, and once it reaches HAProxy, delete the X-Forwarded-For headers and use reqirep to rename X-Force-IP to X-Forwarded-For.
This is what my config chunk looks like:
acl custom-ip hdr_cnt(X-Force-IP) 1
reqidel ^X-Forwarded-For:.* if custom-ip
reqrep X-Force-IP X-Forwarded-For if custom-ip
but when it gets into my app, the app server (lighttpd) rejects it with "HTTP 400 Bad Request" as if it were malformed.
[ec2-user@haproxy-stage]$ curl -I -H "X-Forwarded-For: 123.456.7.12" "http://www.example.com"
HTTP/1.1 200 OK
Set-Cookie: PHPSESSID=mcs0tqlsg31haiavqopdvm02i6; path=/; domain=www.example.com
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Date: Sun, 11 Jan 2015 02:57:34 GMT
Server: beta
[ec2-user@haproxy-stage]$ curl -I -H "X-Forwarded-For: 123.456.7.12" -H "X-Force-IP: 321.456.7.12" "http://www.example.com"
HTTP/1.1 400 Bad Request
Content-Type: text/html
Content-Length: 349
Date: Sun, 11 Jan 2015 02:57:44 GMT
Server: beta
From the above, it looks like the ACL is working.
I checked with tcpdump on the app server, and it seems HAProxy has deleted the X-Forwarded-For header but also deleted X-Force-IP instead of renaming it.
[ec2-user@beta ~]# sudo tcpdump -A -s 20240 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' | egrep --line-buffered "^........(GET |HTTP\/|POST |HEAD )|^[A-Za-z0-9-]+: " | sed -r 's/^........(GET |HTTP\/|POST |HEAD )/\n\1/g'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 20240 bytes
GET / HTTP/1.1
User-Agent: curl/7.38.0
Host: www.example.com
Accept: */*
Connection: close
HTTP/1.1 400 Bad Request
Content-Type: text/html
Content-Length: 349
Connection: close
Date: Sun, 11 Jan 2015 02:56:50 GMT
Server: beta
The previous capture was with X-Force-IP, and the following is without it:
GET / HTTP/1.1
User-Agent: curl/7.38.0
Host: www.example.com
Accept: */*
X-Forwarded-For: 123.456.7.12
Connection: close
HTTP/1.1 200 OK
X-Powered-By: PHP/5.3.4
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Connection: close
Transfer-Encoding: chunked
Date: Sun, 11 Jan 2015 02:57:02 GMT
Server: beta
^C71 packets captured
71 packets received by filter
0 packets dropped by kernel
Any help? I was expecting "X-Force-IP: 321.456.7.12" to be converted into "X-Forwarded-For: 321.456.7.12".
Thanks!
Ignacio
The regex matching provided here doesn't do simple substitution. It's quite a bit more powerful, and has to be used accordingly.
reqrep ^X-Force-IP:(.*) X-Forwarded-For:\1 if custom-ip
The reqrep (case-sensitive request regex replace) and reqirep (case-insensitive request regex replace) directives operate on individual request header lines, replacing the header name and its value with the 2nd argument if the 1st argument matches... so if there's information you want to preserve (such as the value), you need one or more capture groups, such as (.*), in the 1st argument and a placeholder \1 in the 2nd argument, in order to preserve the data.
Your current configuration does indeed invalidate the request, by creating a malformed/incomplete header line.
Also, you should anchor the pattern to the left side of the header name with ^. Otherwise, the expression could match more headers than you expect.
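Putting that together with the reqidel from the question, the corrected chunk would look like this (same acl; only the reqrep pattern changes):
acl custom-ip hdr_cnt(X-Force-IP) 1
reqidel ^X-Forwarded-For:.* if custom-ip
reqrep ^X-Force-IP:(.*) X-Forwarded-For:\1 if custom-ip
With the capture group in place, the header's value survives the rename, so the app server receives a well-formed X-Forwarded-For line instead of a truncated one.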

curl, Play & the Expect: 100-continue header

Consider a web service written in Play, which accepts POST requests (for uploads). When testing this with a medium-sized image (~75K), I found some strange behaviour. Code speaks more clearly than long explanations, so:
$ curl -vX POST localhost:9000/path/to/upload/API -H "Content-Type: image/jpeg" -d @/path/to/mascot.jpg
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9000 (#0)
> POST /path/to/upload/API HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:9000
> Accept: */*
> Content-Type: image/jpeg
> Content-Length: 27442
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Content-Length: 16
<
* Connection #0 to host localhost left intact
{"success":true}
As you can see, curl adds the header Content-Length: 27442, but that's not right: the real size is 75211, and in Play I indeed got a body of only 27442 bytes. Of course, this is not the intended behaviour. So I tried a different tool; instead of curl I used the POST tool from libwww-perl:
cat /path/to/mascot.jpg | POST -uUsSeE -c image/jpeg http://localhost:9000/path/to/upload/API
POST http://localhost:9000/path/to/upload/API
User-Agent: lwp-request/6.03 libwww-perl/6.05
Content-Length: 75211
Content-Type: image/jpeg
200 OK
Content-Length: 16
Content-Type: application/json; charset=utf-8
Client-Date: Mon, 16 Jun 2014 09:21:00 GMT
Client-Peer: 127.0.0.1:9000
Client-Response-Num: 1
{"success":true}
This request succeeded, so I started to pay more attention to the differences between the tools. For starters: the Content-Length header was correct, but also, the Expect header was missing from the second try. I want the request to succeed either way. The full list of headers as seen in Play (via request.headers) is:
for curl:
ArrayBuffer((Content-Length,ArrayBuffer(27442)),
(Accept,ArrayBuffer(*/*)),
(Content-Type,ArrayBuffer(image/jpeg)),
(Expect,ArrayBuffer(100-continue)),
(User-Agent,ArrayBuffer(curl/7.35.0)),
(Host,ArrayBuffer(localhost:9000)))
for the libwww-perl POST:
ArrayBuffer((TE,ArrayBuffer(deflate,gzip;q=0.3)),
(Connection,ArrayBuffer(TE, close)),
(Content-Length,ArrayBuffer(75211)),
(Content-Type,ArrayBuffer(image/jpeg)),
(User-Agent,ArrayBuffer(lwp-request/6.03 libwww-perl/6.05)),
(Host,ArrayBuffer(localhost:9000)))
So my current thoughts are: the simpler Perl tool used a single request, which is bad practice. The better way would be to wait for a 100 Continue confirmation (especially if you are going to upload several GB of data...). curl keeps sending data until it receives a 200 OK or some bad-request error code. So why does Play send the 200 OK response without waiting for the next chunk? Is it because curl specifies the wrong Content-Length? If it's wrong at all... (perhaps it refers to the size of the current chunk?).
So where does the problem lie: in curl or in the Play web app? And how do I fix it?
The problem was in my curl command. I used the -d argument, which is short for --data or --data-ascii, when I should have used the --data-binary argument.
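In other words, -d/--data-ascii strips carriage returns and newlines when reading a file, which is why the body shrank from 75211 to 27442 bytes. The fixed command keeps the JPEG intact:
$ curl -vX POST localhost:9000/path/to/upload/API -H "Content-Type: image/jpeg" --data-binary @/path/to/mascot.jpg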

Unable to upload a track larger than 7MB using the SoundCloud API

I am unable to upload any tracks larger than about 7MB (413 Request Entity Too Large is returned). This functionality was previously working, and the SoundCloud API states that tracks can be up to 500MB.
Here is an example using curl, with a successful upload (4.9MB) and an unsuccessful one (7.4MB).
I have provided Dropbox links to the tracks (my own production, so no copyright issues!) in case anyone wants to try to replicate this. You will need to add your own oauth_token.
successful upload = 4900kb_307sec_128kbps_44100hz.mp3
curl -i -X POST "https://api.soundcloud.com/tracks.json" \
-F 'oauth_token=********' \
-F 'track[asset_data]=@4900kb_307sec_128kbps_44100hz.mp3' \
-F 'track[title]=A 4.9MB track' \
-F 'track[sharing]=public'
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Access-Control-Allow-Headers: Accept, Authorization, Content-Type, Origin
Access-Control-Allow-Methods: GET, PUT, POST, DELETE
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Date
Age: 0
Cache-Control: no-cache
Content-Type: application/json; charset=utf-8
Date: Wed, 06 Nov 2013 18:22:57 GMT
Location: https://api.soundcloud.com/tracks/118866401
Server: nginx
Via: 1.1 varnish
X-Cache: MISS
X-Cacheable: NO:Cache-Control=no-cache
X-Runtime: 436
X-Varnish: 3652774389
Content-Length: 1623
unsuccessful upload = 7400kb_307sec_192kbps_44100hz.mp3
curl -i -X POST "https://api.soundcloud.com/tracks.json" \
-F 'oauth_token=********' \
-F 'track[asset_data]=@7400kb_307sec_192kbps_44100hz.mp3' \
-F 'track[title]=A 7.4MB track' \
-F 'track[sharing]=public'
HTTP/1.1 100 Continue
HTTP/1.1 413 Request Entity Too Large
Date: Wed, 06 Nov 2013 18:23:21 GMT
Server: ECS (lhr/4799)
Content-Length: 0
Connection: close
thanks
We have the same issue with SoundCloud. It seems to be an issue in their nginx.conf (web server configuration).
Please contact SoundCloud developer support.
This was an issue in SoundCloud API routing; it has now been fixed.
For details, see the comments at:
Soundcloud: increased 413 failures (Request Entity Too Large) on upload
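For anyone wanting to confirm the fix, re-running the failing upload from the question (same placeholder oauth_token and file name) should now return 201 Created instead of 413:
curl -i -X POST "https://api.soundcloud.com/tracks.json" \
-F 'oauth_token=********' \
-F 'track[asset_data]=@7400kb_307sec_192kbps_44100hz.mp3' \
-F 'track[title]=A 7.4MB track' \
-F 'track[sharing]=public'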