Rate limit dropping drastically - stocktwits

Hello, I've been noticing some strange behavior around request rates since last night. I am doing a simple curl, and my rate limit has gone from 68 -> 65 -> 61.
curl -v -X GET https://api.stocktwits.com/api/2/streams/symbol/AAPL.json
Why is this happening? Does it have to do with the large-scale DDoS attack yesterday? I checked the announcement mailing list and didn't see anything regarding this. When will this be back to normal?

The rate limit is based on the user's IP. We were having some problems getting the correct IP from Cloudflare's reverse proxy, but this has been fixed.
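If you want to watch the per-IP quota yourself, you can read it straight off the response headers. A minimal PHP sketch; the X-RateLimit-* header names are an assumption based on common API conventions, so check the Stocktwits docs for the exact names:

<?php
// Minimal sketch: dump any rate-limit headers from a Stocktwits response.
// The X-RateLimit-* names are an assumption (a common API convention);
// verify the exact header names in the Stocktwits API docs.
$ch = curl_init('https://api.stocktwits.com/api/2/streams/symbol/AAPL.json');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true); // keep response headers in the output
$response   = curl_exec($ch);
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
curl_close($ch);

foreach (explode("\r\n", substr($response, 0, $headerSize)) as $line) {
    if (stripos($line, 'X-RateLimit') === 0) {
        echo $line, "\n"; // e.g. X-RateLimit-Remaining: 61
    }
}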

Related

Concurrent Connection Test

So I ran into a network problem the other day, and I'm trying to find a way to test for it in the future.
I had a lot of users online at once and hit my router's maximum IP connection limit (not DHCP! TCP/UDP connections).
Once I figured out what the problem was, it was fairly simple to fix. However, I was wondering: is there any way to simulate this kind of activity? Everything worked fine when I tested it; it wasn't until I had 150+ users that I discovered I had a problem.
I have spent the last 3-4 hours looking for such a test/audit tool. Here is what I found:
http://bittwist.sourceforge.net/ - DDoS simulator (can't make it work; I barely get past 300 connections)
http://stevesouders.com/hpws/max-connections.php - Browser concurrent-connection tester (this hits the browser's limit (6 in Chrome) without making a dent on my router, even when open in 70+ tabs at the same time)
http://www.smallnetbuilder.com/lanwan/lanwan-howto/31103-how-we-test-hardware-routers-revision-3 - A tool linked about halfway down the page (reads like it's exactly what I want, but it barely has a noticeable effect on my router)
http://www.http-kit.org/600k-concurrent-connection-http-kit.html - Concurrent HTTP connection simulator (this one seems like it would do what I want, but my Linux-fu is limited and I can't get it working. tear)
So do you guys have a tool to test your routers with? I would love something that does both TCP and UDP.
(BTW, for anyone misunderstanding: I'm not trying to test "speed", just the sheer number of connections.)
Thanks!
Kz
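
One low-tech option is a script that simply opens and holds a pile of sockets at once. A minimal PHP sketch, assuming a target host you control outside your LAN (so each socket actually consumes a NAT table entry); switching the scheme to udp:// covers the UDP side:

<?php
// Minimal sketch: hold open many simultaneous TCP connections to stress
// the router's connection table. Target and count are placeholders; use
// a host you control. For UDP, change the scheme to "udp://".
$target = 'tcp://example.com:80'; // placeholder: a server outside your LAN
$count  = 500;                    // placeholder: raise until the router chokes
$open   = [];

for ($i = 0; $i < $count; $i++) {
    // Async connect so we don't block on each TCP handshake in turn.
    $s = @stream_socket_client(
        $target, $errno, $errstr, 5,
        STREAM_CLIENT_CONNECT | STREAM_CLIENT_ASYNC_CONNECT
    );
    if ($s !== false) {
        $open[] = $s;
    }
}

echo count($open), " sockets opened; holding them for 60s...\n";
sleep(60); // keep the connections alive while you watch the router

foreach ($open as $s) {
    fclose($s);
}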

Why does my Github webhook keep timing out?

We couldn’t deliver this payload: Service Timeout
I was successfully sending webhooks to my server 5 minutes ago, and now I just keep getting timeouts. I tried deleting the webhook and re-adding it, and changing the URL it points to, but nothing.
Am I flooding it with too many pushes, or is GitHub's webhook service just down?
It also turns out that GitHub has a 10-second timeout set on their webhooks. That is what I ran into. See the documentation here.
Unless there is some kind of error on the GitHub side (which doesn't seem to be the case at the moment, given their "System Status" history), you might check the program receiving the payload of that webhook.
See a similar problem in Supybot-plugins 225:
I contacted GitHub support and one of the employees has been troubleshooting this for me. Here is part of what he had to say about the issue:
I just tried making a request manually from one of our machines, and that went through with no error (see curl -v output below).
However, I did notice that it took extremely long for the request to be processed -- over 15 seconds (for 2 bytes of data).
Decoupling the listening and reception of the payload from its processing is generally the right approach, as I recommended in "Perl Script slow over Tomcat 6.0 and generates service time out".
The first part should be as fast as possible.
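To illustrate that split, here's a minimal PHP sketch of a receiver that only spools the payload to disk and acknowledges immediately; the spool directory and the separate worker/cron job are assumptions about your setup:

<?php
// Minimal sketch: accept the webhook fast, defer the slow work.
// A separate worker (cron job, queue consumer, ...) processes the
// spooled files later. The spool path is a placeholder.
$payload  = file_get_contents('php://input');
$spoolDir = __DIR__ . '/webhook-spool';
@mkdir($spoolDir, 0700, true);

// Unique filename per delivery so concurrent hooks don't collide.
file_put_contents($spoolDir . '/' . uniqid('gh_', true) . '.json', $payload);

http_response_code(202); // respond well inside GitHub's 10-second window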

How long does an app in development get banned from Facebook if it exceeds limits?

I have an app I'm developing against Facebook that timed out a few hours ago during my first production use. Of course, I tried to get it to do too much, and the HTTP call timed out. So I rewrote what I was doing to use threaded connections, which sped up the interaction significantly! However, I was so engrossed in getting my interaction to speed up (it equated to about 25-50 calls; I'm not exactly sure, I was expecting 25, but some of my results show it was 50) that I didn't even stop to think about how fast I was hitting Facebook.
So I started getting "Uncaught OAuthException: It looks like you were misusing this feature by going too fast. You've been blocked from using it.", which is what I now get even if I try to run my program with only 1 hit. I've added a sleep into my system to limit the hits to 1/second, but I'm concerned that my app (which was not making public posts, so no one could have been bothered by them) is now forever banned from Facebook, as it says I'm banned from the feature, with a reference to learn about blocks in the Help Center; except I can't find any reference in the Help Center to my specific situation.
Does anyone know how long my app is out of commission?
And what are the specific limits regarding the speed at which you can access Facebook? (A reference, please, because I've searched the hell out of FB and can't find one.)
It depends on what has blocked you. In this case, it was a spam bot that stopped me from posting comments into a group. Apparently there is a non-specific number of times you can post comments in a group in a short amount of time. The amount varies, but hovers around 150ish, give or take 50 (at the time of my tests).
The ban appeared to be consistently set to about 19 hours at that time (May 2014). I've confirmed this through continued testing in test groups and the subsequent bans. However, Facebook developers are unable to give a solid set of numbers, as they say it's controlled by a spam algorithm which changes based on server usage. So, roughly 150 comments within about 3 minutes = a ban of about 19 hours.
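If it helps anyone pacing their own calls: the fixed sleep the question mentions can be replaced by a wrapper that only waits out the remainder of each interval. A minimal PHP sketch; makeGraphCall() is a hypothetical placeholder for the app's real Graph request:

<?php
// Minimal sketch of the 1-call-per-second pacing described above.
// makeGraphCall() is a hypothetical stand-in for the real Graph API
// request; only the pacing logic matters here.
function makeGraphCall(string $req): void
{
    echo "calling: $req\n"; // placeholder for the real HTTP call
}

function throttledCalls(array $requests, float $minInterval = 1.0): void
{
    $last = 0.0;
    foreach ($requests as $req) {
        $elapsed = microtime(true) - $last;
        if ($elapsed < $minInterval) {
            // Sleep only for the remainder of the interval, not a full second.
            usleep((int) round(($minInterval - $elapsed) * 1e6));
        }
        $last = microtime(true);
        makeGraphCall($req);
    }
}

throttledCalls(['call-1', 'call-2', 'call-3']);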

DFP returning empty vast files (ads)

I am using DFP to serve ads, and Fiddler to monitor the requests.
After 3 GET requests to the DFP server in a short period of time (say 30 seconds), every subsequent request will return a list of empty ads.
Does DFP have some sort of spam protection? If so, is there a way around it? Debugging an ad implementation is quite slow when your ad payloads are empty!
There is definitely some rate limiting going on within DFP... I have run into this many times! From what I can tell, it may be per ad unit... and it doesn't last very long.
As for debugging, have you tried the Google DFP console? That makes debugging a lot easier... and I am pretty sure it will give you the diagnostics you need without the rate limit being an issue.
Have you looked into adding a "correlator" value to the parameters?
Just &c=rand(10000,99999).
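Spelled out as PHP, that just means appending a fresh random value to each tag URL so repeated requests don't look identical; the base tag here is a placeholder:

<?php
// Minimal sketch: cache-bust each VAST request with a random correlator.
// The base tag URL is a placeholder for your real DFP ad tag.
$baseTag = 'https://example.com/your-dfp-vast-tag?sz=640x480';
$tagUrl  = $baseTag . '&c=' . rand(10000, 99999);
echo $tagUrl, "\n";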

Facebook Graph Latency

The following code fragment
// Pull each user ID from the request and fetch that user's profile
// picture from the Graph API, one blocking HTTP request per user.
for ($i = 0; $i < 60; $i++) {
    $u[$i]   = $_REQUEST["u" . $i];
    $pic[$i] = imagecreatefromjpeg("http://graph.facebook.com/" . $u[$i] . "/picture");
}
is taking more than 90 seconds to execute on my new server. It was taking less than 15 seconds on my shared hosting server. However, on the dedicated server it is taking more than 90 seconds.
The data center of my new server is Asia Pacific.
Please advise on how I can reduce this time for fetching images from the Graph API.
thanks and regards
Why not just request all the pictures' URLs in a single call?
https://graph.facebook.com/?fields=picture&ids=[CSV LIST OF IDS]&access_token=ACCESS_TOKEN
You'll then have a list of all the images and can fetch them however you wish.
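A minimal PHP sketch of that approach, combining the batched metadata call with parallel downloads via curl_multi; the IDs and ACCESS_TOKEN are placeholders, and the shape of the picture field varies between Graph API versions, so adjust the parsing to what your version returns:

<?php
// Minimal sketch: one batched Graph call for all picture URLs, then
// parallel downloads via curl_multi instead of 60 serial requests.
// $ids and ACCESS_TOKEN are placeholders.
$ids  = ['1111', '2222', '3333']; // placeholder user IDs
$meta = json_decode(file_get_contents(
    'https://graph.facebook.com/?fields=picture&ids=' . implode(',', $ids)
    . '&access_token=ACCESS_TOKEN'
), true);

$mh      = curl_multi_init();
$handles = [];
foreach ($meta as $id => $info) {
    // The 'picture' field is nested in newer Graph versions and a plain
    // string in older ones; handle whichever your version returns.
    $url = is_array($info['picture'])
        ? $info['picture']['data']['url']
        : $info['picture'];
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // /picture redirects to a CDN
    curl_multi_add_handle($mh, $ch);
    $handles[$id] = $ch;
}

// Drive all transfers to completion concurrently.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);

$pic = [];
foreach ($handles as $id => $ch) {
    $pic[$id] = imagecreatefromstring(curl_multi_getcontent($ch));
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
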
is taking more than 90 seconds to execute on my new server.
Well, for 60 HTTP requests that’s not too bad, I’d say.
It was taking less than 15 seconds on my shared hosting server. However, on the dedicated server it is taking more than 90 seconds.
Maybe the connection of your old server was just faster…?
The data center of my new server is Asia Pacific.
Do you know, by any chance, which one it was before?
Please advise on how I can reduce this time for fetching images from the Graph API.
Do you have to request all these images in one go?
Maybe your app's workflow (which we don't know anything about yet) would allow for other approaches, like fetching user images at an earlier time (e.g. when a user starts using your app) and caching them locally, so that you don't have to do 60+ HTTP requests in one go.
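A minimal PHP sketch of that caching idea; the cache directory and the one-day TTL are arbitrary placeholder choices:

<?php
// Minimal sketch: fetch a user's picture once and reuse the local copy,
// so the 60-image loop never hits the network on warm requests.
// Cache path and TTL are placeholders; copy() on an http:// URL
// requires allow_url_fopen to be enabled.
function cachedPicture(string $userId, string $cacheDir = '/tmp/fb-pics'): string
{
    @mkdir($cacheDir, 0700, true);
    $path = $cacheDir . '/' . basename($userId) . '.jpg';

    // Refresh only when the cached copy is missing or older than a day.
    if (!is_file($path) || filemtime($path) < time() - 86400) {
        copy('http://graph.facebook.com/' . $userId . '/picture', $path);
    }
    return $path;
}

$img = imagecreatefromjpeg(cachedPicture('1111')); // placeholder ID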