When uploading a photo with the Facebook REST API, I occasionally get a "503/Failed to write to server" from Facebook.
I can see in a capture of the TCP traffic that, while I am sending the image over SSL, the Facebook server suddenly sends an Encrypted Alert (code 21) and shortly afterwards sends a bunch of RST flags. This always happens in the middle of the transfer.
I have tested with various images, and the bigger the image, the more likely the upload is to fail: a 3 KB image succeeds 100% of the time, a 400 KB image about 50% of the time, a 600 KB image about 25% of the time, and an 800 KB or larger image seems to fail every time.
I should also add that sometimes even a much larger image (2,000 KB) will upload successfully many times in a row, but once one failure happens, the failure rates described above set in.
So my question is, what can be causing this behavior? Why would the upload fail on one attempt and succeed on subsequent attempts with the same image?
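For what it's worth, a minimal client-side workaround while the root cause is unknown would be to retry on 5xx responses with backoff. The sketch below is PHP with an illustrative endpoint and field name, not the actual REST API call signature:

<?php
// Sketch: retry a photo upload on transient 5xx errors with exponential backoff.
function uploadWithRetry(string $endpoint, string $imagePath, int $maxAttempts = 3): string {
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        $ch = curl_init($endpoint);
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_POSTFIELDS     => ['photo' => new CURLFile($imagePath)], // illustrative field name
        ]);
        $body   = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        if ($body !== false && $status < 500) {
            return $body;            // success, or an error that retrying won't fix
        }
        sleep(2 ** $attempt);        // back off before the next attempt
    }
    throw new RuntimeException("Upload failed after $maxAttempts attempts");
}

This doesn't explain the Encrypted Alert, but it matches the observation that a failed transfer often succeeds on a later attempt.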
I am working on the Web Vitals for a website and was checking the Network tab in Chrome Developer Tools. The website loads fully, but in the Network tab the requests keep increasing, the transferred resources grow to 7.8 MB, and the site's slider keeps repeating in the network log. How can I check why so many requests are being made?
Here is a picture of the website's Network tab.
I see that the resource names are slide-X.jpg. Without seeing the website or its code, I can only guess that there's a carousel on the page that cycles through images. If the images aren't cacheable, they would continue to be loaded over the network. If they are cacheable, I'd expect to see no network requests at all, or at worst a 304 HTTP "Not Modified" response code.
So I'd recommend confirming what kinds of widgets are on the page, like a carousel with repetitive behavior, and checking the cache-control headers of static content such as images, to avoid loading the images each time. Personally, I think carousels are bad UX, so I'd even suggest you consider removing it altogether! Regardless, you should still cache your content more efficiently.
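To illustrate the caching point, here is a minimal PHP sketch of serving an image with freshness and validation headers, so repeat slider requests are answered from the browser cache or with a cheap 304 (the file path and max-age are illustrative):

<?php
// Sketch: serve a slider image with cache headers so the browser can
// reuse it, or revalidate with a 304, instead of re-downloading it.
$file  = __DIR__ . '/images/slide-1.jpg';        // illustrative path
$mtime = filemtime($file);
$etag  = '"' . md5($file . $mtime) . '"';

header('Cache-Control: public, max-age=86400');  // fresh for one day
header('ETag: ' . $etag);
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');

// If the browser already holds this version, send 304 and no body.
if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? '') === $etag) {
    http_response_code(304);
    exit;
}

header('Content-Type: image/jpeg');
readfile($file);

In practice you'd set the same headers in the web server configuration for static files; the script form just makes the mechanics visible.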
I am currently developing an iOS application that includes a feature for uploading several images at once, along with an album name.
I came up with a solution of base64-encoding the images so I can send a nested JSON payload instead of using the multipart form-data method.
On localhost, my application seems able to send many pictures at once, say 15. However, when sending through my web server (an Amazon EC2 free-tier instance), it can only send up to 4 pictures at a time; with more than 4 pictures, nothing appears.
I have tried debugging the networking part, and it turns out that status 200 is returned with no images sent. Does the problem come from the server side, or something else?
Updated
I think I've found some important insight. I'll classify it into two scenarios from debugging in the simulator:
i) Simulator connecting to my server: sending only one picture, the size is around 252 bytes; sending two pictures, around 450 bytes. The weird thing is that with more than 3 pictures, the size comes out as only 208 bytes. That is very strange; it should grow as the number of pictures increases.
However, I remembered that things work perfectly fine on localhost, so I debugged again with the simulator connecting to localhost to learn more.
ii) Simulator connecting to localhost: one picture is 252 bytes, 2 pictures around 450 bytes, and 4 pictures around 1,152 bytes. Here the size grows as the number of pictures increases, so scenario ii) makes sense.
Anyway, I still have no idea what causes this problem; I believe it must involve the server. Please help!
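An editorial note: if the backend is PHP (the question doesn't say, so this is an assumption), a classic cause of exactly this symptom is exceeding post_max_size. PHP then silently discards the request body, $_POST and $_FILES arrive empty, and the script can still return 200. Base64 inflates each image by roughly 33%, which would explain why a few pictures fit and more do not. A minimal server-side guard:

<?php
// Sketch: detect a request body that exceeded post_max_size, which PHP
// otherwise swallows silently (empty $_POST/$_FILES, yet a 200 response).

function iniBytes(string $value): int {
    // Convert shorthand such as "8M" or "512K" into bytes.
    $units = ['K' => 1024, 'M' => 1024 ** 2, 'G' => 1024 ** 3];
    $unit  = strtoupper(substr($value, -1));
    return isset($units[$unit]) ? (int)$value * $units[$unit] : (int)$value;
}

$contentLength = (int)($_SERVER['CONTENT_LENGTH'] ?? 0);
$postMax       = iniBytes(ini_get('post_max_size'));

if ($contentLength > $postMax) {
    http_response_code(413);   // Payload Too Large, instead of a misleading 200
    echo json_encode(['error' => "body of $contentLength bytes exceeds post_max_size ($postMax)"]);
    exit;
}

The actual fix would be raising post_max_size and upload_max_filesize in php.ini on the EC2 instance (and client_max_body_size if nginx sits in front), all of which commonly default to a few megabytes.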
1. Recorded the script through JMeter, without images.
2. Ran the script with 10 users.
3. JMeter shows the execution and response times.
But how can we justify, and show evidence to top-level management, that the application's response time measured without capturing images is the same as the live user experience?
The better way to do it would be to check the option "Retrieve All Embedded Resources" in: Thread Group Right Click -> Sampler -> HTTP Request -> Advanced -> Retrieve All Embedded Resources, so that all resources are loaded.
If you don't want to measure the embedded resources response times, for example if you are using a CDN or 3rd parties, you can use a "View Results Table" and enable the option "Child samples". This way you can see the response time from the main requests and from the embedded resources separately.
The issue is that secondary requests are made in parallel threads, so the sum of the response times is larger than the response time registered by the Transaction Controller. To avoid this, you can check "Parallel downloads. Number", next to "Retrieve All Embedded Resources" in the "HTTP Request" sampler, and enter the number of parallel downloads.
You may also find this blog useful:
https://www.redline13.com/blog/
Comparing the two directly is not apples-to-apples, because they measure different things. Load Tester actually measures many of the same things for each, but the metric generally considered most important, Page Duration, measures a different aspect of performance in each case.
Virtual Browsers:
Virtual Browsers work at the HTTP layer – they send the same HTTP messages to the server that real browsers would send. The Page Duration measures the time from the beginning of the first request that is sent to the server to the end of the last response for a resource on that page.
Our virtual browsers (JMeter) will use the same number of connections to the server as a real browser, and will distribute the requests amongst those connections in a very similar way: inactive connections are used first, connections remain open for a while, and so on. When done correctly, the target application cannot tell the difference between our virtual browser and a human operating a real browser.
Real Browsers:
Well, these are real browsers, driven by our virtual user instead of a human user. The driving takes place via APIs into the browser that are designed for automation (e.g. the JMeter WebDriver Sampler, which uses Selenium).
For example, a Go To URL step instructs the browser to navigate to a URL. The Duration of that step measures from the time the command is sent to the browser until the browser reports completion (or failure). In the case of the Go To URL command, the command is complete when the browser fires the “On Load” event. This step will include the amount of time to get all the resources from the server – which is what is measured by the Virtual Browsers. It will ALSO include the amount of time the browser takes to render the page on the screen, which is not measured by the Virtual Browsers (since they never render the page).
The following code fragment
for ($i = 0; $i < 60; $i++) {
    $u[$i] = $_REQUEST["u" . $i];   // user ID passed in the request
    // One blocking HTTP request per user: 60 sequential round-trips.
    $pic[$i] = imagecreatefromjpeg("http://graph.facebook.com/" . $u[$i] . "/picture");
}
is taking more than 90 seconds to execute on my new server. It was taking less than 15 seconds on my shared hosting server. However, on the dedicated server it takes more than 90 seconds.
The data center of my new server is Asia Pacific.
Please advise on how I can reduce the time it takes to fetch these images from the Graph API.
Why not just request all the pictures' URLs in a single call?
https://graph.facebook.com/?fields=picture&ids=[CSV LIST OF IDS]&access_token=ACCESS_TOKEN
You'll then have a list of all the images and can fetch them all however you so wish
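A rough PHP sketch of that batched approach, plus a curl_multi fan-out to download the returned URLs in parallel (the IDs are illustrative, $accessToken is assumed to exist, and the exact response shape depends on the Graph API version):

<?php
// Sketch: one Graph API call for all picture URLs, then parallel downloads.
$ids = ['100001', '100002', '100003'];          // illustrative user IDs
$url = 'https://graph.facebook.com/?fields=picture&ids=' . implode(',', $ids)
     . '&access_token=' . $accessToken;         // $accessToken assumed to exist

$batch = json_decode(file_get_contents($url), true);

$multi   = curl_multi_init();
$handles = [];
foreach ($batch as $id => $info) {
    // Older API versions return the URL directly; newer ones nest it.
    $picUrl = is_array($info['picture']) ? $info['picture']['data']['url'] : $info['picture'];
    $ch = curl_init($picUrl);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_multi_add_handle($multi, $ch);
    $handles[$id] = $ch;
}

// Run all downloads concurrently instead of 60 sequential round-trips.
do {
    curl_multi_exec($multi, $running);
    curl_multi_select($multi);
} while ($running > 0);

$pics = [];
foreach ($handles as $id => $ch) {
    $pics[$id] = imagecreatefromstring(curl_multi_getcontent($ch));
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);

One request for the metadata plus concurrent image downloads should cut the 60 sequential round-trips down to roughly the latency of the slowest single transfer.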
is taking more than 90 seconds to execute on my new server.
Well, for 60 HTTP requests that’s not too bad, I’d say.
It was taking less than 15 seconds on my shared hosting server. However, on the dedicated server it takes more than 90 seconds.
Maybe the connection of your old server was just faster …?
The data center of my new server is Asia Pacific.
Do you know by any chance, which one it was before?
Please advise on how I can reduce the time it takes to fetch these images from the Graph API.
Do you have to request all these images in one go?
Maybe your app’s workflow (which we don’t know anything about yet) would allow for other approaches, like fetching user images at an earlier time (e.g. when a user starts using your app) and caching them locally, so that you don’t have to do 60+ HTTP requests in one go.
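A minimal sketch of that caching idea, assuming a PHP backend with a writable cache directory (the path, staleness window, and function name are illustrative):

<?php
// Sketch: return a locally cached copy of a user's profile picture,
// fetching from the Graph API only when the cache is cold or stale.

function cachedProfilePicture(string $userId, int $maxAgeSeconds = 86400): string {
    $path = __DIR__ . "/cache/{$userId}.jpg";   // illustrative cache location

    // Serve from disk if a fresh copy already exists.
    if (is_file($path) && (time() - filemtime($path)) < $maxAgeSeconds) {
        return $path;
    }

    // Otherwise fetch once and store it for subsequent requests.
    $img = file_get_contents("https://graph.facebook.com/{$userId}/picture");
    if ($img !== false) {
        file_put_contents($path, $img);
    }
    return $path;
}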
I'm streaming videos via RTMP from Amazon CloudFront. Videos are taking a very long time to start playing, and I don't have any way of figuring out why. Normally I'd use the "Net" panel in Firebug or the Web Inspector to get a good first impression of when an asset starts to load and how long it takes to be sent (which can indicate whether the problem is on the server end or the network, versus the browser rendering). But since the video is played within a Flash player (Flowplayer in this case), it's not possible to glean any info about the status of the stream. And since it's served from Amazon CloudFront, I can't put any kind of debugging or measuring tools on the server (if such a tool even exists).
So... my question is: what are some ways I can go about investigating this problem? I'm hoping there would be some settings I can tweak on either the front-end (flowplayer) or back-end (Cloudfront), but without being able to measure anything or even understand where the problem is, I'm at a loss as to what those could be.
Any ideas for how to troubleshoot streaming video performance?
You can use Wireshark (which can dissect RTMP) or Fiddler to check what is going on... another point to keep in mind, besides the client and the server, is your ISP.
To dig deeper you can use http://rtmpdump.mplayerhq.hu/, http://www.fluorinefx.com/, or http://www.broccoliproducts.com/softnotebook/rtmpclient/rtmpclient.php.
You need to keep in mind that RTMP isn't ideal, since it usually bypasses proxies and tries to make a direct connection... if this doesn't work it can fall back, but that means some time has already passed (it waits for a connection timeout, etc.). If you have an option to set CloudFront/Flowplayer to RTMPT, I would recommend doing so, since that tunnels the connection over port 80.
Presumably, if you attempt to view a video, then come back 20 minutes later and hit it again, it loads quickly?
CloudFront serves content along this path: SAN -> Edge Servers -> Client
This is all well and good in a specific use case (i.e. small origin files and a large, long-running cache), but it becomes an issue when scaled out, with lots of media hosts running content through a system like CloudFront.
The media cache kept on the edge servers gets dumped fairly often: once the cache fills, the oldest files are evicted first. So if you have large video files that are not viewed often, they won't be sitting in the edge server cache, and they take a long time to transfer to the edges, giving an utterly horrific end-user experience.
The same is true of YouTube, for example: go and watch some randomly obscure, long video, and try it through a couple of proxies so you hit different edge servers; you'll see exactly the same thing occur.
I noticed a very noticeable lag when streaming RTMP from CloudFront. I found that switching to straight HTTP progressive download from the Amazon S3 bucket made the lag go away.