Facebook Graph Latency - facebook

The following code fragment
for ($i = 0; $i < 60; $i++) {
    $u[$i]   = $_REQUEST["u" . $i];
    // One HTTP request to the Graph API per user - 60 requests in total.
    $pic[$i] = imagecreatefromjpeg("http://graph.facebook.com/" . $u[$i] . "/picture");
}
is taking more than 90 seconds to execute on my new server. It was taking less than 15 seconds on my shared hosting server. However, on the dedicated server it is taking more than 90 seconds.
The data center of my new server is Asia Pacific.
Please advise on how I can reduce this time of fetching images from the Graph API.
thanks and regards

Why not just request all the pictures' URLs in a single call?
https://graph.facebook.com/?fields=picture&ids=[CSV LIST OF IDS]&access_token=ACCESS_TOKEN
You'll then have a list of all the image URLs and can fetch them however you wish.
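A rough sketch of that approach, assuming the same u0..u59 request parameters as the fragment above and a placeholder access token (depending on the Graph API version, the picture field comes back either as a plain URL or as a nested data object):

<?php
// Gather the same u0..u59 request parameters as the original loop.
$ids = array();
for ($i = 0; $i < 60; $i++) {
    if (!empty($_REQUEST["u" . $i])) {
        $ids[] = $_REQUEST["u" . $i];
    }
}

// One Graph API call for all picture URLs instead of 60 separate image fetches.
$accessToken = "ACCESS_TOKEN";   // placeholder
$url = "https://graph.facebook.com/?fields=picture"
     . "&ids=" . urlencode(implode(",", $ids))
     . "&access_token=" . urlencode($accessToken);
$data = json_decode(file_get_contents($url), true);

// Then load the images themselves, sequentially or in parallel.
$pic = array();
foreach ($ids as $i => $id) {
    if (isset($data[$id]['picture'])) {
        $picture  = $data[$id]['picture'];
        $imageUrl = is_array($picture) ? $picture['data']['url'] : $picture;
        $pic[$i]  = imagecreatefromjpeg($imageUrl);
    }
}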

is taking more than 90 seconds to execute on my new server.
Well, for 60 HTTP requests that’s not too bad, I’d say.
It was taking less than 15 seconds on my shared hosting server. However, on the dedicated server it is taking more than 90 seconds.
Maybe the connection of your old server was just faster …?
The data center of my new server is Asia Pacific.
Do you know, by any chance, which one it was before?
Please advise on how I can reduce this time of fetching images from the Graph API.
Do you have to request all these images in one go?
Maybe your app's workflow (which we don't know anything about yet) would allow for other approaches, like fetching user images ahead of time (e.g. when a user starts using your app) and caching them locally, so that you don't have to do 60+ HTTP requests in one go.
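For example, a minimal local cache along those lines (cache directory and expiry are arbitrary placeholders):

<?php
// Return a local path for a user's profile picture, fetching it from the
// Graph API only if there is no reasonably fresh copy on disk yet.
function cached_picture($userId, $cacheDir = "/tmp/fb_pics", $maxAge = 86400) {
    if (!is_dir($cacheDir)) {
        mkdir($cacheDir, 0777, true);
    }
    $file = $cacheDir . "/" . basename($userId) . ".jpg";
    if (!file_exists($file) || time() - filemtime($file) > $maxAge) {
        $data = file_get_contents("http://graph.facebook.com/" . urlencode($userId) . "/picture");
        if ($data !== false) {
            file_put_contents($file, $data);
        }
    }
    return $file;
}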

Related

How to archive live streaming in Azure Media Service

I am trying to use Azure Media Services for 24/7 live streaming and also to persist the streamed video. The documentation says a Live Output can set archiveWindowLength up to 25 hours for VOD, but I'm not able to persist the whole streamed video.
Any idea how to achieve this? I am quite new to this area. Any help is appreciated.
The DVR window length for a single LiveOutput is 25 hours. The reason for the 25 hours is to provide a one-hour overlap for you to switch to a second LiveOutput with a new Asset underneath.
Typically I set this up with an Azure Function and a Logic App running on a timer to ping-pong between two LiveOutputs. You have to create a new LiveOutput with a new Asset each time.
Think of LiveOutputs as "tape recorders" and the Asset as the "tape". You have to swap between tape recorders and switch tapes every xx hours.
You do not necessarily have to wait a full 25 hours, though. I actually recommend not doing that, because the manifest gets really huge. Loading such a large HLS or DASH manifest on a client can really mess with memory and cause some bad things to happen. So you could consider doing the ping-pong between your "tape recorders" every hour.
If you wish to "publish" the live event to your audience with a smaller DVR window (say 10 minutes or 30 minutes), you could additionally create a third LiveOutput and Asset, set its DVR window to 30 minutes, and leave it running forever.
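A rough sketch of that ping-pong loop; createLiveOutput() and deleteLiveOutput() are hypothetical helpers you would implement on top of the Azure Media Services management API or SDK, and in practice the switching would be driven by an Azure Function or Logic App timer rather than a long-running script:

<?php
// Two "tape recorders" (LiveOutputs) on one live event; every hour we start
// the idle one on a fresh "tape" (Asset) and then stop the active one.
$liveEvent = "myLiveEvent";                    // placeholder live event name
$recorders = array("outputA", "outputB");
$current   = 0;

// Start the first recorder on its first Asset ("PT1H" = 1-hour archive window).
createLiveOutput($liveEvent, $recorders[$current], "archive-" . date("Ymd-His"), "PT1H");

while (true) {
    sleep(3600);                               // record for roughly one hour

    // Start the idle recorder before stopping the active one, so the two
    // archives overlap and nothing is lost during the switch.
    $next = 1 - $current;
    createLiveOutput($liveEvent, $recorders[$next], "archive-" . date("Ymd-His"), "PT1H");

    // The Asset behind the stopped output remains as an on-demand archive.
    deleteLiveOutput($liveEvent, $recorders[$current]);
    $current = $next;
}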

How does Fabric Answers send data to the server? Should events be submitted periodically or immediately?

I've used Fabric for quite a few applications; however, I was curious about the performance when a single application submits potentially hundreds of events per minute.
For this example I'm going to be using a Pedometer application, in which I would want to keep track of the amount of steps users are taking in my application. Considering the average user walks 100 steps per minute, I wouldn't want the application to be sending several dozen updates to the server.
How would Fabric handle this? Would it just tell the server "Hey, there were 273 step events in the last 5 minutes with this metadata", or would it send 273 individual step events?
Pedometer applications typically run in the background, so how would we get data to Fabric without the user opening the application?
Great question! Todd from Fabric. These get batched and sent at time intervals; certain events (like installs) also trigger an upload of the queued event data. You can watch our traffic in the Xcode debugger if you are curious about the specifics for your app.
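This is not Fabric's actual implementation, but the queue-and-flush pattern described above looks roughly like this (class name, interval, and sendBatch() are made up for illustration):

<?php
// Illustrative event batcher: events are queued locally and uploaded either
// after a time interval has passed or when a "priority" event forces a flush.
class EventBatcher {
    private $queue = array();
    private $lastFlush;
    private $interval;

    public function __construct($intervalSeconds = 300) {
        $this->interval  = $intervalSeconds;
        $this->lastFlush = time();
    }

    public function record($name, $metadata = array(), $forceFlush = false) {
        $this->queue[] = array("name" => $name, "meta" => $metadata, "ts" => time());
        if ($forceFlush || time() - $this->lastFlush >= $this->interval) {
            $this->flush();
        }
    }

    public function flush() {
        if (empty($this->queue)) {
            return;
        }
        // sendBatch() is a hypothetical helper that POSTs the whole batch
        // (e.g. "273 step events since the last flush") in one request.
        sendBatch($this->queue);
        $this->queue     = array();
        $this->lastFlush = time();
    }
}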

Import multiple photos to parse.com database

I'm trying to build a navigation app with place locations and their photos.
I have 200 spot location names (String), their locations (GeoPoints), and their images (JPG).
Is it possible to upload the database, including the images, all at once?
I only managed to upload the String and GeoPoint data using JSON, but still can't do it for the image files.
Anyway, clicking one by one is definitely not an option. I have 200 images and still counting. It might reach 500 or more in several weeks.
Thank you in advance.
How large are the images?
If you can scale the photos down a little bit, and if you use multiple threads on the HTTP client talking to parse.com, then you should be able to saturate the WiFi / ISP bandwidth available to your device.
I.e., if you've got 10 Mb/s available upstream to the ISP, then you ought to be able to use multiple async connections so that you are pushing close to 10 Mb/s of photos to parse.com.
It probably won't help much (parse - android example), but this was precisely the target of this question: 63 photos (each 70 KB in size) upload in 3 seconds total.
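As a rough sketch of that kind of parallel upload, using PHP's curl_multi against the classic parse.com REST file endpoint (application keys are placeholders, and you would still need a second pass to create the spot objects that reference the uploaded files):

<?php
// Upload a batch of JPEGs to parse.com in parallel over its REST API.
function uploadPhotos(array $paths, $appId, $restKey, $concurrency = 8) {
    $multi   = curl_multi_init();
    $active  = array();   // path => curl handle currently in flight
    $results = array();   // path => decoded JSON response ("name", "url")
    $queue   = $paths;

    $start = function ($path) use ($multi, &$active, $appId, $restKey) {
        $ch = curl_init("https://api.parse.com/1/files/" . rawurlencode(basename($path)));
        curl_setopt_array($ch, array(
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => file_get_contents($path),
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_HTTPHEADER     => array(
                "X-Parse-Application-Id: " . $appId,
                "X-Parse-REST-API-Key: " . $restKey,
                "Content-Type: image/jpeg",
            ),
        ));
        curl_multi_add_handle($multi, $ch);
        $active[$path] = $ch;
    };

    // Keep a fixed number of uploads in flight at any time.
    while (count($active) < $concurrency && $queue) {
        $start(array_shift($queue));
    }
    do {
        curl_multi_exec($multi, $running);
        curl_multi_select($multi);
        while ($done = curl_multi_info_read($multi)) {
            $ch   = $done['handle'];
            $path = array_search($ch, $active, true);
            $results[$path] = json_decode(curl_multi_getcontent($ch), true);
            curl_multi_remove_handle($multi, $ch);
            curl_close($ch);
            unset($active[$path]);
            if ($queue) {
                $start(array_shift($queue));
            }
        }
    } while ($running || $active || $queue);
    curl_multi_close($multi);
    return $results;
}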

What is the best way to process 25k API calls in 1 hour?

I'm working on a tracking site that tracks a player's levels for a game from day to day.
It's going to have to process around 25,000 API calls once a day. I'd like to be able to get this done in 1 hour but I would be okay with processing them all in 2 hours for now.
This is the API I would need to call for each player in my database to get their information: http://hiscore.runescape.com/index_lite.ws?player=Zezima
My site and database are hosted on a VPS.
My thought on how to achieve this is to spin up a handful of Digital Ocean VPS instances when the time comes to make the API calls, and have my main VPS distribute the calls across the DO instances, which will make the API calls and insert the results back into my database.
Parallelization is your friend here. Pool your queue listeners and have them run on a machine with adequate CPU and memory.
How fast is your process? Completing 25,000 transactions in one hour means about 7 per second. Do you have timing data to help guide the number of instances you'll need?
I'm assuming that your database will allow simultaneous INSERTs. You don't want those getting in each other's way.
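For a sense of scale: 25,000 requests in an hour is about 7 per second, so a single worker keeping a modest number of requests in flight may already be enough before distributing across droplets. A rough sketch in PHP using curl_multi (batch size and the result callback are placeholders):

<?php
// Fetch hiscores for a list of players in concurrent batches and pass each
// response to a callback (e.g. an INSERT into your database).
function fetchHiscores(array $players, callable $onResult, $batchSize = 20) {
    foreach (array_chunk($players, $batchSize) as $batch) {
        $multi   = curl_multi_init();
        $handles = array();
        foreach ($batch as $player) {
            $ch = curl_init("http://hiscore.runescape.com/index_lite.ws?player=" . urlencode($player));
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 10);
            curl_multi_add_handle($multi, $ch);
            $handles[$player] = $ch;
        }
        do {                                    // run the whole batch in parallel
            curl_multi_exec($multi, $running);
            curl_multi_select($multi);
        } while ($running > 0);
        foreach ($handles as $player => $ch) {
            $onResult($player, curl_multi_getcontent($ch));
            curl_multi_remove_handle($multi, $ch);
            curl_close($ch);
        }
        curl_multi_close($multi);
    }
}

// 25,000 players / 3,600 seconds is roughly 7 requests per second; with
// batches of 20 taking about a second each, this fits comfortably in an hour.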

Need inputs on optimising web service calls from Perl

Current implementation:
Divide the original file into files equal to the number of servers.
Ensure each server picks one file for processing.
Each server splits the file into 90 buckets.
Use ForkManager to fork 90 processes, each operating on a bucket (see the sketch after this list).
The child processes will make the API calls.
Merge the output of child processes.
Merge the output of each server.
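For illustration, a minimal PHP analogue of the fork-per-bucket step (the original uses Perl's Parallel::ForkManager); callApi() is a hypothetical helper for a single web service request, and the pcntl extension makes this CLI-only:

<?php
// Fork one worker per bucket; each worker fetches its users' data and writes
// to its own output file, which the parent merges afterwards (steps 4-6 above).
$buckets = glob("bucket_*.txt");   // placeholder: one input file per bucket
$pids    = array();

foreach ($buckets as $bucket) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    }
    if ($pid === 0) {                          // child process
        $out = fopen($bucket . ".out", "w");
        foreach (file($bucket, FILE_IGNORE_NEW_LINES) as $user) {
            fwrite($out, callApi($user));      // one API call per user (~40KB)
        }
        fclose($out);
        exit(0);
    }
    $pids[] = $pid;                            // parent keeps forking
}

// Wait for all children, then concatenate their outputs into one file.
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}
file_put_contents("merged.out", implode("", array_map(function ($b) {
    return file_get_contents($b . ".out");
}, $buckets)));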
Stats:
The size of the content downloaded using the API call is 40KB.
On 2 servers, the above process runs in 15 minutes for a 225k-user file. My aim is to finish a 10-million-user file in 30 minutes. (Hope this doesn't sound absurd!)
I contemplated using BerkeleyDB but couldn't find out how to convert a BerkeleyDB file into a normal ASCII file.
This sounds like a one-time operation to me. Although I don't understand the 30 minute limit, I have a few suggestions I know from experience.
First of all, as I said in my comment, your bottleneck will not be reading the data from your files. It will also not be writing the results back to a hard drive. The bottleneck will be the transfer between your machines and the remote machines. Your setup sounds sophisticated, but that might not help you in this situation.
If you are hitting a web service, someone is running that service. There are servers that can only handle a certain amount of load. I have brought down the dev environment servers of a big logistics company with a very small load test I ran at night. Often, these things are equipped for long-term load, but not short, heavy load.
Since IT is all about talking to each other through various protocols, like web services or other APIs, you should also consider just talking to the people who run this service. If you have a business relationship, that is easy. If not, try to find a way to reach them and ask whether their service can handle that many requests at all. You could end up with them excluding you permanently, because to their admins it looks like you tried to DDoS them.
I'd ask them if you could send them the files (or an excerpt of the data, cut down to what is relevant for processing) so they can do the operations in batch on their side. That way, you remove the load of processing everything as web requests, and the time it takes to make those requests.