WordPress in waiting state - redirect

I built a website for someone and used https://gtmetrix.com to get some analytics, mainly because the wait time is huge (~20 seconds) even though there are no heavy images. A screenshot is here:
http://img42.com/05yvZ
One of my problems is that it takes quite a long time to perform the 301 redirect. I'm not sure why, but if someone has a key to the solution I would really appreciate it. At least some hints on what to search for would be nice.
The second problem is that after the redirection the waiting time is still huge. As expected, I have a few plugins; their JavaScript files are requested approximately 6 seconds after the redirection. Could someone please point me in the right direction?
P.S. I have disabled all plugins and started from a plain Twenty Eleven theme, but I still get the waiting time during the redirection and a smaller delay after it.
Thanks in advance

A few suggestions:
1 and 2.) If the redirect is adding noticeable delays, test different redirect methods. There are several approaches, including HTML meta refresh and server-side (i.e. PHP) redirects; I typically stick to server side. If a server-side redirect still shows noticeable delays, that's a good indicator that you're experiencing server issues, and it may very well be your server that has been causing your speed problems all along; contact your hosting provider. (The client-side meta variant is sketched after this list.)
3.) Take a look at the size of your media: images and video, and Flash if you're using any. Often it's giant images that were sliced or saved poorly and never optimized for the web in an image editor like Photoshop. Optimize your images for the web and re-save them at a lower weight to save significantly on load time. In many cases you can also avoid clunky images altogether by building the area out in pure CSS3 (e.g. odd repeatable .gifs used to create gradients or borders).
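For completeness, the client-side meta variant looks like the sketch below (domain and path are placeholders). A true 301, though, comes from the server, e.g. a single Redirect 301 /old-page/ https://example.com/new-page/ line in .htaccess or a header('Location: https://example.com/new-page/', true, 301); call in PHP.

<!-- Client-side fallback only: it redirects after 0 seconds but does NOT
     return a real 301 status code, so crawlers and caches won't treat it
     as a permanent move. -->
<meta http-equiv="refresh" content="0; url=https://example.com/new-page/">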


How to optimize fetching ads from DFP and Google Tag Manager scripts?

I'm part of a team working on improving the Lighthouse score of our website:
https://www.bikewale.com/m/royalenfield-bikes/classic-350/
We are concentrating on optimising JavaScript delivery on the page in order to decrease time-to-interactive. However, we noticed that scripts like gtm.js and gpt.js, and the loading of ads on page load, are limiting our maximum improvement to around 70 (Lighthouse performance score).
After optimising JavaScript delivery on our end, we were able to score at most 70. We tried removing the JS files for Google Tag Manager and GPT and saw the score rise to approximately 95. Lazy loading all ads, and hence the requests to DFP, gives us a boost to around 75, but we can't do this because the first ad is in the first fold.
Please note that we have followed the guides and best practices mentioned in the following links:
gtm - https://developers.google.com/tag-manager/quickstart
gpt - https://support.google.com/admanager/answer/7485975
googletag.pubads().refresh(immediateAds); // immediateAds is array of first fold ads
The refresh() method is deteriorating the performance.
Is there a way to optimise the delivery of ads and GTM scripts in order to improve performance? Possibly a newer version of the scripts, or an alternative? Is there a way to load the first-fold ad immediately and lazy load the other ads on the page, without using the refresh() method?
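One option worth sketching: GPT's built-in lazy loading (googletag.pubads().enableLazyLoad()) defers each slot's ad request until the slot approaches the viewport, so a first-fold slot is requested immediately on load while below-the-fold slots wait, with no manual refresh() call. A rough sketch, with placeholder ad-unit paths, sizes, div IDs and margin values:

window.googletag = window.googletag || { cmd: [] };

googletag.cmd.push(function () {
  // Placeholder ad-unit paths, sizes and div IDs; replace with your own.
  googletag.defineSlot('/1234/atf-banner', [320, 50], 'div-gpt-atf')
           .addService(googletag.pubads());
  googletag.defineSlot('/1234/btf-mpu', [300, 250], 'div-gpt-btf')
           .addService(googletag.pubads());

  // Built-in lazy loading: a slot's ad request is made only as the slot
  // nears the viewport, so the first-fold slot loads right away and the
  // rest are deferred, with no manual refresh() needed.
  googletag.pubads().enableLazyLoad({
    fetchMarginPercent: 200,   // example value: start fetching 2 viewports away
    renderMarginPercent: 100,  // example value: start rendering 1 viewport away
    mobileScaling: 2.0         // example value: widen the margins on mobile
  });

  googletag.enableServices();
});

Each slot's div still calls googletag.display('div-gpt-...') as usual; with lazy loading enabled, display() only registers the slot, and the actual fetch happens once the slot is within the configured margin of the viewport.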
Congrats on achieving the 70 score! It's a very respectable score for an e-commerce site.
I'm not super familiar with GTM or GPT, but I can recommend one optimization to help those libraries do their jobs more effectively: preconnect to origins from which ads are served.
For each of those origins, you should add two hints near the top of your page:
<link rel="dns-prefetch" href="https://dt.adsafeprotected.com">
<link rel="preconnect" href="https://dt.adsafeprotected.com">
The first hint asks the browser to do a DNS lookup for the origin. The second asks the browser to set up a TCP connection. Preconnect accomplishes everything dns-prefetch does, but not all browsers support preconnect; using both hints lets you get the best performance out of as many browsers as possible.
Both of these hints give the browser a head start for resources that it won't otherwise know about until later in the page load process.
Keep in mind, depending on the resources loaded, you may need two preconnect hints. You can check the waterfall chart to make sure all connections are set up at the beginning of the page load.
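The "two hints" case usually comes down to CORS: if the same origin serves both ordinary resources and CORS resources (web fonts, fetch/XHR), the browser needs a separate connection for each mode, so you add one preconnect with the crossorigin attribute and one without. For example, reusing the origin from above:

<link rel="preconnect" href="https://dt.adsafeprotected.com">
<link rel="preconnect" href="https://dt.adsafeprotected.com" crossorigin>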

Google Search Console: Fetch-and-Render fails on random resources

It successfully fetches the page itself, but then breaks on whatever the page tries to load: images, styles, fonts, JS files, API calls – whatever. Every time it's something different, and it says that the resources are "Temporarily unreachable".
And sometimes it successfully loads and renders the entire page with no errors.
Their documentation says that "Temporarily unreachable" means that either the server took too long to respond, or that the fetch was cancelled "because too many consecutive requests were made to the server for different URLs".
The page I tested is completely loaded within 1.5-2 seconds. Is that too long?
It makes 20 requests: 1 HTML, 4 CSS files (3 of them are third-party @font-face), 6 JS files, 4 API calls (1 fails, intentionally), 4 font files, 1 image. Total data size is 2.5 MB. Is that too much?
I checked every failed resource with their robots.txt tester – each of them is allowed for Googlebot.
I don't have any noindex/nofollow directives anywhere on the site.
And, as I said, sometimes it just succeeds, as if everything is fine.
With all of that, I have 3 questions:
Do I have to care about Google's rendering at all? If I just pre-render my HTML (with PhantomJS or whatever) for Googlebot, won't that be enough for normal indexing?
If I do need Google's rendering – do I have to care about these random failures? If fetch-and-render at least sometimes succeeds, does that mean my site will be indexed normally?
If I do have to care about the failures – what else can I do to make it work reliably? Such random behavior doesn't make any sense and doesn't give me any clues.
You may want to see this related post:
https://webmasters.stackexchange.com/questions/118727/what-else-can-i-test-when-troubleshooting-a-fetch-issue-in-google-search-console/118730#118730
My thinking is that the failed API call may have some bearing or, more likely, that DNS or shared-hosting issues are at play, given the randomness of your fetch results. Some people have reported better behaviour after adding a robots.txt file (even if it is empty or just User-agent: *), and others have found that it was simply overuse of the fetch tool on their domain.
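For reference, a minimal "allow everything" robots.txt (the empty Disallow permits all crawling) is just:

User-agent: *
Disallow: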

Why is our InfoPath forms server very slow or very fast with the same InfoPath web form?

We have Enterprise 2010 on a set of servers: 3 front ends, tons of RAM. We use IE 9 exclusively. The issue is that the same web-based InfoPath form sometimes opens very fast and sometimes very slow. By slow I mean the form might take 1 or 2 minutes to open; by fast I mean the same form opening in a couple of seconds. This has nothing to do with the design of the form, other than that larger, more complex forms take longer to open than smaller, simpler ones; but all forms can open so slowly as to be unacceptable to the user, and they can open both fast and slow within the same minute. On our 2007 server they open just fine. Originally we didn't have this issue; I don't know whether it was an update, IE 9, or something else. Any suggestions on what it might be or how we might diagnose this issue are welcome. Thanks -dave
I have seen "random" performance issues like this when the Internet Explorer's "Automatically detect settings" option is selected. See this article. If possible in your environment, you might try disabling this on a test machine to see if this helps.

How big can I make the DOM tree without degrading performance?

I'm making a single page application, and one approach I am considering is keeping all of the templates as part of the single-page DOM tree (basically compiling server-side and sending in one page). I don't expect each tree to be very complicated.
Given this, what's the maximum number of nodes on the tree before a user on a mediocre computer/browser begins to see performance degradation? For example, 6 views stored as 6 hidden nodes each with 100 subnodes of little HTML bits.
Thanks for the input!
The short of it is, you're going to hit a bandwidth bottleneck before you'd ever hit a DOM size bottleneck.
Well, I don't have any mediocre machines lying around. The only way to find out something like that is to test it. It will be different for every browser, every CPU.
Is your application JavaScript-driven?
If yes, you should consider loading in only the templates you need using XHR, as you're going to be more concerned with load time on mobile than with performance on a crappy HP from 10 years ago.
What you describe should be technically reasonable for any machine of this decade, but you shouldn't load that much junk up front.
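If you want an actual number rather than a guess, one way to "test it" is to build the structure described in the question and time it in the console of whatever low-end browser you care about. A rough sketch (600 nodes total, matching the 6 views of 100 subnodes):

// Rough timing sketch: 6 hidden "views", each with 100 small child nodes.
var start = performance.now();

for (var v = 0; v < 6; v++) {
  var view = document.createElement('div');
  view.style.display = 'none';          // hidden template container
  for (var i = 0; i < 100; i++) {
    var node = document.createElement('p');
    node.textContent = 'view ' + v + ' node ' + i;
    view.appendChild(node);
  }
  document.body.appendChild(view);
}

// Read offsetHeight to force a layout pass; with the views hidden this
// mostly measures DOM construction itself.
var height = document.body.offsetHeight;

console.log('built 600 nodes in ' + (performance.now() - start).toFixed(1) + ' ms');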
A single-page application doesn't necessitate bringing all the templates down at once. For example, your single page can have one or more content divs whose contents are replaced dynamically. If you're thinking about something like running JSON objects through a template to generate the HTML, the template can stay in the browser cache, the JSON itself stays in memory, and you can regenerate the HTML at will, avoiding the DOM-size issue entirely.
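A minimal sketch of that idea, assuming a hypothetical /templates/ URL, a single #content div, and a stand-in templating function rather than any particular library:

// In-memory cache of template strings already fetched this session.
var templateCache = {};

// Fetch a template on demand; combined with the browser's HTTP cache this
// means each template is downloaded at most once.
function getTemplate(name, callback) {
  if (templateCache[name]) {
    callback(templateCache[name]);
    return;
  }
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/templates/' + name + '.html');   // hypothetical path
  xhr.onload = function () {
    templateCache[name] = xhr.responseText;
    callback(xhr.responseText);
  };
  xhr.send();
}

// Render JSON data into the single content div instead of keeping every
// view in the DOM as hidden nodes.
function showView(name, data) {
  getTemplate(name, function (template) {
    // Stand-in for whatever templating you use (Mustache, Handlebars, ...).
    var html = template.replace(/{{(\w+)}}/g, function (match, key) {
      return data[key] != null ? data[key] : '';
    });
    document.getElementById('content').innerHTML = html;
  });
}

// Usage: showView('profile', { name: 'Ada', bikes: 3 });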

How to troubleshoot streaming video (rtmp) performance?

I'm streaming videos via rtmp from Amazon Cloudfront. Videos are taking a loooong time to start playing, and I don't have any way of figuring out why. Normally I'd use the "Net" panel in Firebug or Web Inspector to get a good first impression of when an asset starts to load and how long it takes to be sent (which can indicate whether the problem is on the server end or network versus the browser rendering). But since the video is played within a Flash player (Flowplayer in this case), it's not possible to glean any info about the status of the stream. Also since it's served from Amazon Cloudfront, I can't put any kind of debugging or measuring tools on the server (if such a tool even exists).
So... my question is: what are some ways I can go about investigating this problem? I'm hoping there would be some settings I can tweak on either the front-end (flowplayer) or back-end (Cloudfront), but without being able to measure anything or even understand where the problem is, I'm at a loss as to what those could be.
Any ideas for how to troubleshoot streaming video performance?
You can use Wireshark (it can dissect RTMP) or Fiddler to check what is going on... another point (besides the client and the server) to keep in mind is your ISP.
To dig deeper you can use http://rtmpdump.mplayerhq.hu/, http://www.fluorinefx.com/, or http://www.broccoliproducts.com/softnotebook/rtmpclient/rtmpclient.php.
You need to keep in mind that RTMP isn't ideal, since it usually bypasses proxies and tries to make a direct connection... if that doesn't work it can fall back, but by then some time has already passed (it waits for a connection timeout, etc.)... if you have an option to set CloudFront/Flowplayer to RTMPT, I would recommend doing so, since that tunnels the connection over port 80.
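If you do try RTMPT, the switch in Flowplayer's RTMP plugin is roughly the sketch below. The player/plugin file names, the CloudFront streaming domain and the clip name are placeholders, and property names can differ between plugin versions, so treat it as a starting point rather than a drop-in config.

// Hypothetical Flowplayer 3.x setup: the relevant change for tunnelling is
// pointing netConnectionUrl at rtmpt:// (port 80) instead of rtmp://.
flowplayer("player", "flowplayer-3.2.x.swf", {
  clip: {
    url: "mp4:videos/my-video",        // placeholder clip name
    provider: "rtmp"
  },
  plugins: {
    rtmp: {
      url: "flowplayer.rtmp-3.2.x.swf",
      // CloudFront streaming distribution (placeholder domain), tunnelled
      // over HTTP on port 80 so proxies and firewalls are less of an issue.
      netConnectionUrl: "rtmpt://s1234567890.cloudfront.net/cfx/st"
    }
  }
});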
Presumably, if you go and attempt to view a video, then come back 20 minutes later and hit it again, it loads quickly?
CloudFront's delivery path is roughly: SAN -> Edge Servers -> Client
This is all well and good in a specific use case (i.e. small origin file sizes and a large, long-running cache), but it becomes an issue when it's scaled out with lots of media hosts running content through the system, i.e. CloudFront.
The media cache they keep on the edge servers gets dumped fairly often: once the cache is full, they start evicting from the oldest files. So if you have large video files that are not viewed often, they won't be sitting in the edge-server cache, and they take a long time to transfer to the edges, giving an utterly horrific end-user experience.
The same is true of YouTube, for example: go and watch some randomly obscure, long video, and try it through a couple of proxies so you hit different edge servers; you'll see exactly the same thing occur.
I noticed a very noticeable lag when streaming RTMP from CloudFront. I found that switching to straight HTTP progressive download from the Amazon S3 bucket made the lag go away.