Format of Retry-After Header? - dropbox-api

In Dropbox's Core API best practices there is a statement:
"Apps that hit the rate limits will receive a 503 error which uses the Retry-After header to indicate exactly when it's okay to start making requests again."
This answer references the Retry-After header specification, which allows two formats:
Retry-After: Fri, 31 Dec 1999 23:59:59 GMT
Retry-After: 120
Does anyone know which format Dropbox uses?

It's the latter, where the value is the number of seconds the app should wait before trying again.
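Since the spec allows both formats, a client can handle either defensively. Here is a minimal sketch of a helper that converts a Retry-After value into a delay in seconds, covering both the delay-seconds form (which Dropbox sends) and the HTTP-date form:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def retry_after_seconds(value, now=None):
    """Convert a Retry-After header value into a delay in seconds.

    Handles both forms the HTTP spec allows: delay-seconds
    (what Dropbox sends) and an HTTP-date.
    """
    try:
        return max(0, int(value))            # e.g. "120"
    except ValueError:
        # e.g. "Fri, 31 Dec 1999 23:59:59 GMT"
        when = parsedate_to_datetime(value)
        if now is None:
            now = datetime.now(timezone.utc)
        return max(0, (when - now).total_seconds())
```

An app hitting the rate limit would then `time.sleep(retry_after_seconds(response.headers["Retry-After"]))` before retrying.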


How to efficiently retrieve expiration date for facebook photo URL and renew it before it expires?

Main problem:
The application caches URLs from Facebook's photo CDN
The photos expire at some point
My "technical" problem:
Facebook's CDN "Expires" header doesn't seem to be reliable (or I don't know how to deal with it)
Using CURL to retrieve expiration date:
curl -I "https://scontent-b.xx.fbcdn.net/hphotos-xap1/v/t1.0-9/q82/p320x320/10458607_4654638300864_4316534570262772059_n.jpg?oh=9d34386036754232b79c2208c1075def&oe=54BE4EE2"
One minute before it returned: Mon, 05 Jan 2015 01:34:28 GMT
Calling it again now returned: Mon, 05 Jan 2015 01:35:27 GMT
Both times "Cache-Control" returned the same: Cache-Control: max-age=1209600
So far:
It seems like one of the most reliable ways would be to have a background job checking the photos all the time, but that feels a bit "wrong", like "brute forcing".
Having a background job would potentially allow expired pictures to be served up until the moment the photo URL is "renewed"
My questions are:
Should I use the max-age parameter even though it doesn't seem to change?
Is there a reliable way of using Facebook's CDN URLs?
Any other idea of how this should be implemented?
< joke >Should facebook API be used to punish badly behaving coders?< /joke >
Possible solutions ?
Check facebook for the most recent URL before serving any CDN URL
~> would slow down my requests a lot
Have a background job renewing the URL and expiration dates
~> would potentially serve expired photos until the job "catches" them
Download the photos to my own CDN
~> not a good practice, I would guess
UPDATE:
~> Perhaps Tinder actually caches users' pictures on their own CDN: https://gist.github.com/rtt/10403467 so it seems like Facebook is kind of OK with it?
Expires means exactly one thing, and it's not what you think it is:
The Expires entity-header field gives the date/time after which the response is considered stale. […]
The presence of an Expires field does not imply that the original resource will change or cease to exist at, before, or after that time.
— RFC 2616 §14.21, emphasis mine
If Facebook's image URLs stop working after some point in time, that's their business. Their HTTP headers don't have to mention it, and in fact, don't.
That being said, I suspect that the oe URL parameter may contain an expiration timestamp. If I interpret 54be4ee2 as a hexadecimal number containing a UNIX timestamp, I get January 20th, 2015, which is almost exactly a month from now. Might that be the value you're looking for?
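The guess above is easy to check. This sketch decodes the `oe` parameter from the URL in the question as a hexadecimal UNIX timestamp (assuming, as the answer speculates, that this is indeed what it encodes):

```python
from datetime import datetime, timezone

# The oe query parameter from the CDN URL in the question
oe = "54BE4EE2"
expiry = datetime.fromtimestamp(int(oe, 16), tz=timezone.utc)
print(expiry.date())  # 2015-01-20
```

If this interpretation is right, the app could parse `oe` out of each cached URL and refresh it shortly before that moment, instead of polling blindly.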

Expires header for Facebook JS SDK and Google Analytics

We all know adding a far-future expiration date to static resources is a good practice to increase our websites' page load speed. So we've ensured it for all of our resources BUT the all-too-common Facebook JS SDK and Google Analytics don't do that and thus lower the entire page's speed score.
Examining the headers shows Facebook uses 20 minutes:
Cache-Control: public, max-age=1200
Connection: keep-alive
Content-Type: application/x-javascript; charset=utf-8
Date: Tue, 23 Sep 2014 04:46:38 GMT
Etag: "566aa5d57a352e6f298ac52e73344fdc"
Expires: Tue, 23 Sep 2014 05:06:38 GMT
and Google Analytics uses 2 hours:
HTTP/1.1 200 OK
Date: Tue, 23 Sep 2014 04:45:49 GMT
Expires: Tue, 23 Sep 2014 06:45:49 GMT
Last-Modified: Mon, 08 Sep 2014 18:50:13 GMT
X-Content-Type-Options: nosniff
Content-Type: text/javascript
Server: Golfe2
Age: 1390
Cache-Control: public, max-age=7200
Alternate-Protocol: 80:quic,p=0.002
Content-Length: 16062
Is there a way to force them to longer expiration dates?
These scripts have a short cache expire headers because they're frequently updated. When Facebook and Google add new features and fix bugs, they deploy these changes by overwriting the existing files (the ones you linked to in your question). This allows users of these services to get the latest features without having to do anything, but it comes at the cost (as you point out) of needing short cache expire headers.
You could host these scripts yourself and set far-future expire headers on them, but that would require you to manually update them when the libraries change. This would be very time-consuming and often impossible because most of these updates aren't put in public changelogs.
Moreover, doing this yourself could very likely end up being a net loss in performance because you'd lose the network cache effect that you gain due to the sheer popularity of these services. For example, I'd imagine when most users come to your site they already have a cached version of these scripts (i.e. it's extremely likely that sometime in the past two hours, the person visiting your website also visited another site that uses Google Analytics). On the other hand, if you hosted your own version, first-time visitors would always have to download your version.
To sum up, I wouldn't go out of your way to fix this "problem". Doing so would take a lot of time and probably not give you the desired effects.
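For what it's worth, the headers quoted in the question already tell you how much freshness a cached copy has left: per the HTTP caching rules, it is roughly `max-age` minus the response's current `Age`. A minimal sketch:

```python
import re

def remaining_freshness(cache_control, age):
    """Seconds of freshness left: max-age minus the response's current Age."""
    m = re.search(r"max-age=(\d+)", cache_control)
    max_age = int(m.group(1)) if m else 0
    return max(0, max_age - age)

# Values from the Google Analytics response in the question:
# Cache-Control: public, max-age=7200 and Age: 1390
print(remaining_freshness("public, max-age=7200", 1390))  # 5810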
The solution finally implemented was to move to Facebook's redirect API, which doesn't force loading their script on each page load. It's actually what Stack Overflow does here as well. Start a session in a private/incognito browser and you'll see.
This link might help: https://developers.facebook.com/docs/php/howto/example_facebook_login

500 internal server error when posting

In the past few days I started encountering a problem while trying to post to Facebook.
For information about my project, I use Unity3D with the social networking plugin from Prime31 for iOS.
Initially, the user is required to login then confirm that the app is allowed to post on the user's timeline.
This happens without flaw and I get response that everything happened properly, until I attempt to post to the timeline. Then I get a 500 internal server error.
here is the error the facebook api gives out:
response error: 500 internal server error
response text:
Response Headers:
ACCESS-CONTROL-ALLOW-ORIGIN: *
EXPIRES: Sat, 01 Jan 2000 00:00:00 GMT
CONTENT-TYPE: application/json; charset=UTF-8
WWW-AUTHENTICATE: OAuth "Facebook Platform" "unknown_error" "An unknown error has occurred."
X-FB-REV: 1102018
CONNECTION: keep-alive
PRAGMA: no-cache
CACHE-CONTROL: no-store
DATE: Thu, 30 Jan 2014 15:07:54 GMT
CONTENT-LENGTH: 87
X-FB-DEBUG: q453qH5ianbIOyeof0X0Ah0PDpAlkxW9+OxLBusy2do=
and the function I am using to post to Facebook is:
Facebook.instance.postMessageWithLinkAndLinkToImage(Message, URL, URLName, ImageURL, Caption, completionHandler);
I do make sure that the user is properly logged in and that the publish permissions have been given.
There have been no code changes in the plugin or in my code handling the plugin before and after this issue started to occur, so clearly there is something I am missing. Is this a fault on Facebook's servers? I have already contacted Prime31, but they say that they are not responsible for what the Facebook API responds with.
What can cause 500 internal server error when trying to post to Facebook?
EDIT: it would seem that the postMessageWithLinkAndLinkToImage boils down to:
graphRequest( "me/feed", HTTPVerb.POST, parameters, completionHandler );
A 500 means your request is triggering an error on Facebook's servers. So something has likely changed on the Facebook API side to cause this.
I would check the plugin code and see where it's making the API call, then you can manually make that API call using something like curl. If it fails then contact Facebook support and see if they can point you to the right API documentation.
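Following that suggestion, one way to replay the underlying `me/feed` call by hand is to build the POST body yourself and send it with curl. A sketch (the token and content values are placeholders, not real ones):

```python
from urllib.parse import urlencode

# Hypothetical values -- substitute a real access token and your own content
params = {
    "access_token": "YOUR_ACCESS_TOKEN",
    "message": "Hello from my app",
    "link": "https://example.com",
    "picture": "https://example.com/image.jpg",
}
body = urlencode(params)

# The equivalent manual call with curl would look like:
#   curl -i -X POST "https://graph.facebook.com/me/feed" -d "<body>"
print(body)
```

If the hand-built request reproduces the 500, the plugin is off the hook and the problem lies with the parameters or with Facebook itself.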

If I enable migrations "July 2013 Breaking Changes" of my app, then search user by email wouldn't work

I'm using the search graph API to search for users by email. Here's an example of how I do that:
GET https://graph.facebook.com/search?q=Sample%40gmail.com&fields=name%2clink%2ceducation%2cid%2cwork%2cabout%2cpicture&limit=2&type=user&access_token=...
Before the July 2013 Breaking Changes it was working fine. Once I enabled the breaking changes I started getting HTTP 403 responses saying that the access token is not valid.
HTTP/1.1 403 Forbidden
Access-Control-Allow-Origin: *
Cache-Control: no-store
Content-Type: text/javascript; charset=UTF-8
Expires: Sat, 01 Jan 2000 00:00:00 GMT
Pragma: no-cache
WWW-Authenticate: OAuth "Facebook Platform" "insufficient_scope" "(#200) Must have a valid access_token to access this endpoint"
X-FB-Rev: 798183
X-FB-Debug: lZPVbdTmZrCo+Bde/MNEXy/halUzQx7qIDW5aiZeT0g=
Date: Mon, 29 Apr 2013 07:25:29 GMT
Connection: keep-alive
Content-Length: 120
{"error":{"message":"(#200) Must have a valid access_token to access this endpoint","type":"OAuthException","code":200}}
Once I remove the %40 (the percent-encoded @ sign) or the '.com' part from the request, I get a normal HTTP 200 result. The problem is that it's not what I'm looking for. I want to be able to search for users by email the way I was able to before.
Example of requests that does work:
GET https://graph.facebook.com/search?q=Samplegmail.com&fields=name%2clink%2ceducation%2cid%2cwork%2cabout%2cpicture&limit=2&type=user&access_token=...
GET https://graph.facebook.com/search?q=Sample%40gmail&fields=name%2clink%2ceducation%2cid%2cwork%2cabout%2cpicture&limit=2&type=user&access_token=...
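As an aside, `%40` in those URLs is just the percent-encoding of `@`. If you build the query string with a URL library rather than by hand, the email is encoded consistently, which rules out encoding mistakes as the cause. A sketch:

```python
from urllib.parse import quote, urlencode

email = "Sample@gmail.com"
print(quote(email, safe=""))  # Sample%40gmail.com

# Building the whole query string for the search call
qs = urlencode({"q": email, "type": "user", "limit": 2})
print(qs)  # q=Sample%40gmail.com&type=user&limit=2
```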
As 林果皞 said, this is a bug in the Graph API. I filed a bug report here:
https://developers.facebook.com/bugs/335452696581712
Have you tried FQL?
SELECT uid,username,first_name, middle_name, pic,pic_small, pic_big,
pic_square,
last_name,name,email,birthday,birthday_date,contact_email,current_address,current_location,education,hometown_location,
languages, locale,profile_url,sex,work FROM user where
contains('youremail@example.com')
Search by email works fine for me (the access token was granted just basic permissions):
https://developers.facebook.com/tools/explorer?method=GET&path=%2Fsearch%3Fq%3Dlimkokhole%40gmail.com%26fields%3Dname%2Clink%2Ceducation%2Cid%2Cwork%2Cabout%2Cpicture%26limit%3D2%26type%3Duser
Update:
Recently the Graph API Explorer app enabled the "July 2013 Breaking Changes", so the example I've shown above wouldn't work anymore.

SIP Subscribe Receives 486 BUSY HERE

I am trying to SUBSCRIBE to a watcher list, and the server frequently responds with 486 BUSY HERE. However, the RFCs describe 486 as a possible response to an INVITE, for which it makes more sense. At other times, the server does respond correctly, with a 200 OK followed by a NOTIFY request.
I am working with an ALU IMS core.
Has anyone seen this issue?
My SUBSCRIBE Request:
SUBSCRIBE sip:yyyyyyyyyyy@example.com;transport=TCP SIP/2.0
Call-ID: 81fcd7229c882f230c726e21f16aadc9@10.0.2.15
CSeq: 4 SUBSCRIBE
From: "XXXX" <sip:yyyyyyyyyyy@example.com>;tag=92521573
To: <sip:yyyyyyyyyyy@example.com>
Via: SIP/2.0/TCP 10.0.2.15:5060;branch=z9hG4bK68630e2ec7c21d2e991854010b7f64543332
Max-Forwards: 70
Contact: <sip:yyyyyyyyyyy@10.0.2.15:5060;transport=TCP>;+g.oma.sip-im;expires=3600
User-Agent: My Android Client/OMA1.0
Require: pref
Supported: replaces,100rel,eventlist,timer
Event: presence.winfo
Accept: application/watcherinfo+xml
Route: <sip:yyyyyyyyyyy@z.z.z.z:5060;transport=TCP;lr>
Expires: 3600
Content-Length: 0
The thing to remember with SIP response codes is that there are no hard and fast rules about which specific response code should be used in every situation. Invariably, a real-world error condition on a SIP server or UAS does not fall neatly into the definition of one of the SIP failure response codes, so the closest one is used and the status message may be customised and/or a Warning header added.
The 486 response is a little unusual for a SUBSCRIBE request, but it could easily make sense: for example, if the SIP notification server maintaining the subscriptions has a limit on how many active subscriptions it will maintain, or if it's overloaded and doesn't want to process subscription requests for a while.
I'd have a closer look at the 486 response and see if there is a Warning or any other informational type header. Also check whether the response is coming from the intermediate proxy you are using or the end server.
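A quick way to act on that advice is to pull the status line and any Warning header out of the raw response. A minimal sketch (the 486 response text below is a made-up example, not from the asker's trace):

```python
def sip_headers(raw_response):
    """Parse the status line and headers out of a raw SIP response."""
    lines = raw_response.split("\r\n")
    status_line = lines[0]
    headers = {}
    for line in lines[1:]:
        if not line:          # blank line ends the header section
            break
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return status_line, headers

# Hypothetical 486 response, for illustration only
raw = ("SIP/2.0 486 Busy Here\r\n"
       "Warning: 399 notifier.example.com \"Subscription limit reached\"\r\n"
       "Content-Length: 0\r\n"
       "\r\n")
status_line, headers = sip_headers(raw)
print(status_line)               # SIP/2.0 486 Busy Here
print(headers.get("warning"))
```

The Via and Server/User-Agent headers in the real response would also hint at whether the 486 came from the proxy or the end server.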
486 is not a response code defined in RFC 3265. You need to trace your server (if possible) to understand why it decided to send such an unexpected error code.
Sorry for not being much help. I have been working with SIP for several years and never heard of a 486 response for a SUBSCRIBE request. When you find out the reason I'd like to know about it too.