What is `ff.im`? - redirect

When we visit ff.im, we are redirected to http://friendfeed.com.
Here are some other examples:
ff.im/abc
ff.im/efg
How is FriendFeed able to do this?

.im is the country-code top-level domain (ccTLD) for the Isle of Man. The registry normally requires names to be at least three characters long, unless you pay considerably more.
Two-character domains look cool but aren't particularly useful since IE rejects their cookies (old article, but still mostly true for newer IE versions).
When your browser requests ff.im:
GET / HTTP/1.1
Host: ff.im
their webserver responds with a redirect, either to the main FriendFeed site:
HTTP/1.1 302 Found
Date: Sat, 09 Apr 2011 12:29:38 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Content-Length: 0
Location: http://friendfeed.com/
Server: FriendFeedServer/0.1
or to some other place (when using their URL-shortener).
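For illustration, here is a minimal sketch of how a redirect service like this can be built. This is not FriendFeed's actual code; the short-code table, target URLs and port are made up. It just shows the mechanism: answer the request with a 302 and a Location header (TypeScript, using Node's built-in http module):

import * as http from "http";

// Hypothetical short-code table; a real shortener would keep this in a datastore.
const links: Record<string, string> = {
  "/abc": "http://friendfeed.com/some-entry",
  "/efg": "http://friendfeed.com/another-entry",
};

http.createServer((req, res) => {
  // A request for the bare domain gets bounced to the main site.
  const target = req.url === "/" ? "http://friendfeed.com/" : links[req.url ?? ""];
  if (target) {
    res.writeHead(302, { Location: target, "Content-Length": "0" });
  } else {
    res.writeHead(404, { "Content-Type": "text/plain" });
    res.write("Unknown short link\n");
  }
  res.end();
}).listen(8080);

Any browser or HTTP client that follows redirects then ends up on the long URL.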

Related

Do Google Chrome and similar browsers support Range headers for standard downloads?

My initial response headers (notice the Accept-Ranges header):
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin
Access-Control-Allow-Credentials: true
X-RateLimit-Limit: 1
X-RateLimit-Remaining: 0
Date: Thu, 08 Apr 2021 06:14:19 GMT
X-RateLimit-Reset: 1617862461
Accept-Ranges: bytes
Content-Length: 100000000
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="some_file.txt"
Connection: keep-alive
Keep-Alive: timeout=5
I then restart the server and click resume download in Chrome, but Chrome doesn't send back a Range request header.
I'm following the documentation on Mozilla's website.
Am I missing a header, or am I misunderstanding how this works, especially with Chrome and other browsers? Is there another way I can manually support resuming downloads by sending the right response and understanding the right request? From a technical perspective, if Chrome sends back which range it now needs, I will be able to resume the download.
According to this article, Chrome should support something like this. I just need to be pointed in the right direction.
Thanks!
Chrome needs some way to know that the file it's trying to download at that URL is indeed the same file when it tries to resume.
If you add support for an ETag header, this will likely work.
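As a sketch of what that could look like in Express (assumptions: the file lives at ./some_file.txt, and the ETag is derived from the file's size and modification time, which is one common convention rather than a requirement):

import express from "express";
import * as fs from "fs";

const app = express();

app.get("/download", (req, res) => {
  const path = "./some_file.txt";                            // assumed location of the file
  const stat = fs.statSync(path);
  const etag = `"${stat.size}-${Math.round(stat.mtimeMs)}"`; // size + mtime as a simple validator

  res.setHeader("Accept-Ranges", "bytes");
  res.setHeader("ETag", etag);
  res.setHeader("Content-Disposition", 'attachment; filename="some_file.txt"');

  const range = req.headers.range;                           // e.g. "bytes=52428800-"
  const match = range ? /^bytes=(\d+)-(\d*)$/.exec(range) : null; // single range only, for brevity
  if (match) {
    const start = Number(match[1]);
    const end = match[2] ? Math.min(Number(match[2]), stat.size - 1) : stat.size - 1;
    res.status(206);                                         // Partial Content
    res.setHeader("Content-Range", `bytes ${start}-${end}/${stat.size}`);
    res.setHeader("Content-Length", end - start + 1);
    fs.createReadStream(path, { start, end }).pipe(res);
  } else {
    res.setHeader("Content-Length", stat.size);
    fs.createReadStream(path).pipe(res);
  }
});

app.listen(3000);

If the file is served straight from disk, Express's res.sendFile already handles Range and conditional requests for you, so the manual branch above is only needed when you stream the body yourself.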

406 Not Acceptable response received using LWP::UserAgent/File::Download

Edit: it seems the issue was caused by a dropped cookie. There should have been a session id cookie as well.
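In other words, the request was missing one of the cookies the logged-in browser session was sending, and the fix amounts to sending every cookie the browser sends. As a minimal sketch (shown with TypeScript and the built-in fetch for brevity rather than LWP; the session cookie's name is a placeholder, not the site's real one):

async function fetchFile(url: string): Promise<Response> {
  return fetch(url, {
    headers: {
      Accept: "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
      // The session cookie (placeholder name) has to ride along with the others;
      // without it the server answered 406, per the edit above.
      Cookie: "auth_token=<blah>; __cfduid=<blah>; _session_id=<blah>",
    },
  });
}

With LWP the usual equivalent is to keep an HTTP::Cookies jar on the UserAgent so the session cookie set during login is replayed automatically.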
For posterity, here's the original question
When sending a request formed like this:
GET https://<url>?<parameters>
Cache-Control: max-age=0
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset: iso-8859-1,utf-8,UTF-8
Accept-Encoding: gzip, x-gzip, deflate, x-bzip2
Accept-Language: en-US,en;q=0.5
If-None-Match: "6eb7d55abfd0546399e3245ad3a76090"
User-Agent: Mozilla/5.0 libwww-perl/6.13
Cookie: auth_token=<blah>; __cfduid=<blah>
Cookie2: $Version="1"
I receive the following response (response type: text/html; charset=utf-8):
HTTP/1.1 406 Not Acceptable
Cache-Control: no-cache
Connection: keep-alive
Date: Fri, 12 Feb 2016 18:34:00 GMT
Server: cloudflare-nginx
Content-Type: text/html; charset=utf-8
CF-RAY: 273a62969a9b288e-SJC
Client-Date: Fri, 12 Feb 2016 18:34:00 GMT
Client-Peer: <IP4>:443
Client-Response-Num: 10
Client-SSL-Cert-Issuer: /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO ECC Domain Validation Secure Server CA 2
Client-SSL-Cert-Subject: /OU=Domain Control Validated/OU=PositiveSSL Multi-Domain/CN=ssl<blah>.cloudflaressl.com
Client-SSL-Cipher: <some value>
Client-SSL-Socket-Class: IO::Socket::SSL
Client-SSL-Warning: Peer certificate not verified
Client-Transfer-Encoding: chunked
Status: 406 Not Acceptable
X-Runtime: 9
I'm not entirely sure why the response is 406 Not Acceptable. When downloaded with Firefox, the file in question is 996 KB (as reported by Windows 8.1's Explorer). It looks like I have a partially transferred file from my Perl script at 991 KB (again, Windows Explorer size), so it got most of the file before throwing the Not Acceptable response. Using the same URL pattern and request style, I was able to successfully download a 36 MB file from the server with this Perl library and request form, so the size of the file shouldn't be magically past some maximum (chunk) size. As these files are updated at approximately 15-minute intervals, I suppose it's possible that a write was performed on the server, invalidating the ETag before all chunks of this file were complete?
I tried adding chunked to Accept-Encoding, but that header governs content codings rather than transfer codings, and it appears to have no effect on the server's behavior. Additionally, as I've been able to download larger files (same format) from the same server, size alone shouldn't be the cause of my woes. LWP is supposed to be able to handle chunked data returned in response to a GET (as per this newsgroup post).
The server in question is running nginx with Rack::Lint. The particular server configuration (which I in no way control) throws 500 errors on its own attempts to send 304 Not Modified. This caused me to write a workaround for File::Download (sub lintWorkAround here), so I'm not above putting blame on the server in this instance also, if warranted. I don't believe I buggered up the chunk-handling code from File::Download 0.3 (see diff), but I suppose that's also possible. Is it possible to request a particular chunk size from the server?
I'm using LWP (libwww-perl) 6.13 on Perl 5.18.2. The File::Download version is my own 0.4_050601.
So, what else could the 406 error mean? Is there a way to request that the server temporarily cache/version control the entire file so that I can download a given ETag'd file once the transfer begins?
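On the last point: HTTP has no way to make the server keep old versions around, but the standard If-Range mechanism at least guarantees that a resumed transfer never splices two versions together: the server honours the Range only while the ETag still matches, and otherwise returns the full, current body with a 200. A sketch of the idea (TypeScript with fetch rather than LWP; the URL, ETag and offset are placeholders, and it assumes the server supports byte ranges at all):

async function resumeDownload(url: string, etag: string, bytesSoFar: number): Promise<Response> {
  const response = await fetch(url, {
    headers: {
      Range: `bytes=${bytesSoFar}-`, // ask only for the missing tail
      "If-Range": etag,              // ...but only if the representation is unchanged
    },
  });
  if (response.status === 206) {
    // Same version: append the returned bytes to the partial file.
  } else if (response.status === 200) {
    // The ETag no longer matches: the server sent the whole new file, so start over.
  }
  return response;
}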

How to log in to RQM using the REST API?

I'm trying to communicate with an IBM Rational Quality Manager server using its REST API. I'm using the RESTClient browser plugin, and while the browser is logged in, everything works as expected. For the record, my requests look like:
https://server/qm/service/com.ibm.rqm.integration.service.IIntegrationService/resources/project/testscript/urn:com.ibm.rqm:testscript:42
However, if I wait long enough for RQM to log out, the REST API says I need to log back in to proceed (see below). I'm pretty sure this is possible to do via the API itself, because RQM ships with RQMUrlUtility, which accepts a username and password and runs basically the same REST requests I'm using:
java -jar RQMUrlUtility.jar -command GET -user JazzUserID -password JazzPassword -filepath pathtoFile -url REST_URL
So far, I have found this topic explaining how to log in using HTTP basic authentication. Following this advice, I have added Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ= (not my real password) to the request, but RQM still fails to log me in. I have also tried setting User-Agent to a bogus value, as well as sending the value of JSESSIONID in the X-Jazz-CSRF-Prevent header as described here, but regardless of whether these headers are present or not, I get the same response:
Status Code: 200 OK
Cache-Control: no-cache="set-cookie, set-cookie2"
Connection: Keep-Alive
Content-Encoding: gzip
Content-Language: en-US
Content-Type: text/html; charset=UTF-8
Date: Tue, 26 Jan 2016 15:48:02 GMT
Expires: Thu, 01 Dec 1994 16:00:00 GMT
Keep-Alive: timeout=10, max=100
Set-Cookie: JazzFormAuth=Form; Path=/qm; Secure
Set-Cookie: x-com-ibm-team-scenario=ac55f959-c738-4ef0-854d-6e37648edcba%3Bname%3DInitial+Page+Load%3Bextras%3D%2Fqm%2Fauth%2Fauthrequired%2C1453823282026; Path=/
Transfer-Encoding: chunked
X-Powered-By: Servlet/3.0
X-com-ibm-team-repository-web-auth-msg: authrequired
Can anyone with experience with the RQM API tell me what's wrong? Or perhaps I'm missing something basic, common to most REST APIs out there?
Could it be your header name?
Authorisation: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
Should probably be:
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
Notice the "z".
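For reference, a minimal sketch of building the header programmatically so the name and value can't be mistyped (TypeScript on Node; the credentials and URL are placeholders taken from the question):

// Build "Authorization: Basic <base64(user:pass)>" instead of hand-typing it.
async function getTestScript(): Promise<Response> {
  const user = "JazzUserID";
  const password = "JazzPassword";
  const basic = Buffer.from(`${user}:${password}`).toString("base64");
  return fetch(
    "https://server/qm/service/com.ibm.rqm.integration.service.IIntegrationService/resources/project/testscript/urn:com.ibm.rqm:testscript:42",
    { headers: { Authorization: `Basic ${basic}` } },
  );
}

Whether Basic authentication is accepted at all depends on how the Jazz server is configured (the JazzFormAuth cookie in the response above suggests form-based authentication is in play), but building the header this way at least rules out the spelling slip.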

iOS (iPhone/iPad): downloading a big PDF via Safari doesn't work

I have a small site designed to sell an HTTP-downloadable, ~300 MB, DRM-free, page-scanned-images PDF e-book (download the test copy here: http://test.magicmedicine.eu/get/ac123457965d0d4b4d17557a73cf2fe8 ).
It works flawlessly on PC, Mac and Android, but I'm experiencing issues with iOS: when the customer opens the download URL in Safari (I tried via broadband Wi-Fi+DSL), the page loads for ~45 seconds (the page stays blank while the activity indicator spins), then Safari exits with no error message at all.
I tried creating the PDF with the "Fast web view" (progressive download) attribute and I also lowered the compatibility to the minimum (PDF version 1.3), with no results.
Application-side, the download is sent from Apache+PHP via mod_xsendfile ( https://tn123.org/mod_xsendfile/ ) to the client with the following headers (my intent is to avoid the PDF-in-the-browser-via-plugin view):
HTTP/1.1 200 OK
Date: Wed, 23 May 2012 09:50:13 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/5.3.13
Expires: Thu, 24 May 2012 11:50:13 +0200
Cache-Control: must-revalidate, post-check=0, pre-check=0
Pragma: public
Content-Disposition: attachment; filename="book.pdf"
Last-Modified: Sun, 20 May 2012 11:26:54 GMT
ETag: "2e01b4-dde8a9b-4c07610070008"
Content-Length: 232688283
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: application/octet-stream
Any ideas?
Note: I asked this on Super User a couple of days ago and it was closed as "off topic". I hope it's OK to repost it here.

Facebook links to my site resolve as 403 forbidden

Hi, I'm experiencing a super weird problem.
Whenever I post links to my website on Facebook, they come up as Forbidden.
The site itself works great and I have not seen this when linking from other sites.
Could this be a server misconfiguration? Any thoughts on where to look?
Here's some info:
I have a dedicated server running WHM 11.25.0.
I have 2 sites hosted there using cPanel 11.25.0.
The error message:
Forbidden
You don't have permission to access /blog/deepwater-horizon-11/ on this server.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at www.offshoreinjuries.com Port 80
UPDATE:
Here is a sample link if it helps (notice that going to the linked page directly works fine):
http://www.facebook.com/l.php?u=http%3A%2F%2Fwww.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F&h=834ea
UPDATE and ANSWER:
Found the issue and added a complete answer below.
You must have a rule somewhere that reads the HTTP_REFERER and rejects incoming links from Facebook. Seriously. This is what happens behind the scenes:
No referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
HTTP/1.1 200 OK
Date: Fri, 28 May 2010 09:19:45 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.12
X-Pingback: http://www.offshoreinjuries.com/blog/xmlrpc.php
Content-Type: text/html; charset=UTF-8
200 OK, good.
Facebook referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
Referer: http://www.facebook.com/l.php?u=http%3A%2F%2Fwww.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F&h=834ea
HTTP/1.1 403 Forbidden
Date: Fri, 28 May 2010 09:21:04 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
Content-Type: text/html; charset=iso-8859-1
403 Forbidden, bad.
Any other referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
Referer: http://alvaro.es/
HTTP/1.1 200 OK
Date: Fri, 28 May 2010 09:20:36 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.12
X-Pingback: http://www.offshoreinjuries.com/blog/xmlrpc.php
Content-Type: text/html; charset=UTF-8
200 OK again.
Your server is actively rejecting visitors from Facebook.
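To make the mechanism concrete, the behaviour demonstrated above is what a rule like the following sketch would produce. It is written as Express-style middleware in TypeScript purely for illustration; on this particular server the real rule turned out to live in mod_security, as the accepted answer below explains.

import express from "express";

const app = express();

// Illustration only: reject any request whose Referer points at Facebook's
// l.php outbound-link wrapper, which is exactly the pattern the telnet tests expose.
app.use((req, res, next) => {
  const referer = req.headers.referer ?? "";
  if (referer.includes("facebook.com/l.php")) {
    res.status(403).send("Forbidden");
    return;
  }
  next();
});

app.get("/blog/deepwater-horizon-11/", (_req, res) => {
  res.send("post content");
});

app.listen(8080);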
I was finally able to get to the bottom of this behavior.
The default mod_security settings of my host, HostGator, include a set of whitelists and blacklists. Upon inspecting these, I found .facebook.com/l.php blacklisted.
l.php is a wrapper page that warns you that you are leaving Facebook. As I understand it, since this can be easily exploited, HostGator chose to essentially blacklist all outbound Facebook links.
I fixed my problem by removing .facebook.com/l.php from the mod_security blacklist; however, I could have also just reset my mod_security settings to the default (vs. the HostGator config) via a single click in WHM.