Pages take forever to load while using TinyMCE - tinymce

I have a few forms that use TinyMCE. I have noticed recently that the page takes forever to load (over 2 minutes); as soon as I comment out the textarea that uses TinyMCE, the page loads just fine (under 5 seconds). I have no clue what is going on, since it was working just fine on my local machine until last week. I'm using Apache 2, PHP 5, MySQL and xajax.
I have been using Xdebug to find out what is wrong, and all the code finishes running on the server side, but the browser keeps waiting for the page to finish loading, making navigation and the form impossible to use.
Any leads on what could be going on would be of great help.

Have you taken a look at Firebug's Net tab to see whether some file is taking too long to load?
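If the network looks fine, you could also time the editor initialization itself to separate script-download time from init time. A minimal sketch, assuming the TinyMCE 3 API (the mode and theme values are common defaults, not necessarily your config):

    // Rough sketch: measure how long TinyMCE takes to initialize.
    var tinyMceStart = new Date().getTime();
    tinyMCE.init({
        mode: 'textareas',   // attach to every <textarea> on the page
        theme: 'advanced',
        oninit: function () {
            // Fires once all editor instances are ready.
            alert('TinyMCE init took ' + (new Date().getTime() - tinyMceStart) + ' ms');
        }
    });

If the init callback fires quickly but the page still hangs, the bottleneck is more likely network or server-side.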

Related

Mysterious severe performance issue on mobile Safari for just one web page

I have a very large (as in feature-rich) responsive website. It consists of over 150 different UI pages, and so far both rendering and performance on mobile are fine (I'm using an iPhone 5 to test, and occasionally other devices).
Except for one page, which I am coding now. Here's the temporary dev URL:
http://www.jungledragon.org/apps/jd3/daylight
On Mobile Safari, this page performs extremely poorly:
- It takes several seconds to load, much slower than all other pages
- Once loaded, a touch scroll can take 5-10 secs to do anything
- Mobile Safari as a whole becomes unresponsive, or close to it
I'm trying to troubleshoot the root cause of the issue, but no luck so far. I cannot reproduce this on any desktop browser using a small viewport, not even on desktop Safari. On the desktop, I've used several web debuggers to check for any long-running processes, but found none.
Some explanation on what the page does:
It will try to detect your current location (using alerts, I discovered this takes little time)
Based on your current location and the current date, it will calculate the sun times for the day. This too is nearly instant
Based on the sun times, it will dynamically generate a table, and then finally show it on screen
Here's what I am seeing in detail on mobile Safari:
The server response is fine, the page loads quickly and shows the site header soon
Next, the content body is blank and stays blank for several seconds (which I cannot explain)
Finally, the suntimes table renders.
This completes the page, yet as of this point, the page as well as the browser are extremely sluggish; scrolling takes forever, and Safari controls are nearly unresponsive. It looks and feels as if the browser could crash at any moment.
Based on my research so far, and given fine performance in all other pages on the site, I'm totally in the dark on what causes this.
Edit: Using BrowserStack I did some more tests:
iPhone 4S: no issues
iPhone 5S: no issues
Galaxy SII: no issues
HTC One X: no issues
iPhone 5: same issue as above
So I'm not seeing the issue on any desktop browser, and on no mobile device except for the iPhone 5 (iOS7).
Edit 2: adding more findings and explanations based on the comments received:
The issue does not seem animation-related. For this I have a number of proof points. A simple proof point is that the page does not do any visual rendering much different from any of the other 100+ pages on the site, which have no performance issue.
The 2nd proof point can be explained by understanding what is going on in this specific page. What happens is this:
The system will detect the current user's time and location. For now, assume that the user actually allows location sharing. Using a simple alert, I've been able to prove that location detection is not the bottleneck.
Based on the user's time and location, the daylight periods are calculated. This is done by using the Suncalc JS library (https://github.com/mourner/suncalc).
The Suncalc library returns an array of daylight periods for the given date and location. I render that array as a table with colored background rows. That is all.
Rendering a table with 12 rows and different background colors is not likely to cause such enormous issues. My theory therefore is that step 2 is the root cause. The Suncalc library has a lot of advanced math in it. I am thinking (without evidence yet) that either my mobile processor is horrible at those kinds of operations, and/or the specific calculation for some reason causes a peak in memory usage (or even a leak).
As an additional proof point: once the page is loaded on mobile, use the right arrow next to the date to navigate to "tomorrow". Again you will see the extremely bad performance. During that step, there is no network activity, no location detection, nothing, just calculations and some very simple rendering. This validates my theory that perhaps the issue lies in the calculation.
Sadly, it looks like native JavaScript profilers on that platform are non-existent. You may also want to try the JavaScript microtime function referenced in this answer. You will need to seed your script with calls at points where you think the bottleneck might be.
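For example, you could bracket the suspected steps with timestamps. A rough sketch, using suncalc's documented getTimes() call (the coordinates and the render function are placeholders, not from your code):

    // Time the calculation step.
    var t0 = new Date().getTime();
    var times = SunCalc.getTimes(new Date(), 51.5, -0.1);  // placeholder coordinates
    alert('SunCalc.getTimes took ' + (new Date().getTime() - t0) + ' ms');

    // Time the rendering step the same way to see which one dominates.
    var t1 = new Date().getTime();
    renderSunTable(times);  // placeholder for the page's own table-rendering step
    alert('table render took ' + (new Date().getTime() - t1) + ' ms');

Since alerts already work for you on the device, this should narrow it down without a profiler.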
Just ran this through Chrome remote debugger (https://developers.google.com/chrome-developer-tools/docs/remote-debugging) on my S3, and it looks like Modernizr's cancelZoom function (showing up in jd3_0006.js) is getting called recursively too many times or by too broad a selector. I've uploaded the profiles into dropbox: https://www.dropbox.com/s/kubxk44smm6qqkx/jungledragon_debug..zip
You can import them into Chrome's debugger on the "Profiles" tab.
I believe your performance problem centers around the use of navigator.geolocation.getCurrentPosition() in your runMap() function:
if (urlDate != null) {
    urlPos(latitude, longitude);
} else {
    if (navigator.geolocation) {
        $(".img-loading").show(100);
        navigator.geolocation.getCurrentPosition(successPos, errorPos, { maximumAge: 600000, timeout: 10000 });
    } else {
        errorPos('');
    }
}
Consider using watchPosition() instead, with a callback that will not halt processing of the script thread. You can cancel the watchPosition() updates by calling clearWatch().
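A minimal sketch of that approach, reusing the handler names from your code:

    // watchPosition() delivers fixes asynchronously via callbacks and
    // returns an id that can later be passed to clearWatch().
    if (navigator.geolocation) {
        var watchId = navigator.geolocation.watchPosition(
            function (pos) {
                navigator.geolocation.clearWatch(watchId);  // stop after the first fix
                successPos(pos);
            },
            errorPos,
            { maximumAge: 600000, timeout: 10000 }
        );
    } else {
        errorPos('');
    }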
So I've played with this some more, and ran the "Timeline" feature in Chrome (load this file into your Chrome timeline tool: https://www.dropbox.com/s/2vpl6z1ntuk3aqj/TimelineRawData-20140328T105820.json), and it looks like this might be your main problem.
Your scripts and libraries (including Google Maps and jQuery) are getting evaluated AFTER the HTML is parsed and Google Analytics runs, because they are at the bottom of the body, not in the head. Unless you have a very good reason to do that, I would recommend moving them to the head.
There seems to be a separate problem with scrolling, but perhaps it will be resolved by this change.

SilverStripe CMS times out when changing pages in the CMS

I have installed SilverStripe on several servers successfully in the past (but I'm not a SilverStripe expert). This time my SS install fails to work, and I'm at a loss as to how to fix it.
The Problem
SilverStripe 2.4.6 installed correctly on the server (AFAIK).
The front-end works as expected. (Shows the default theme. Pages all load correctly.)
I am able to log into the CMS admin section successfully. The CMS loads, but when changing site pages in the CMS using the browser pane on the left, the CMS shows the circular loading symbol. The new page load never completes.
Using the Firebug console in Firefox: when attempting to change pages in the CMS (by clicking on the page browser pane), the CMS tries to load two pages. The second page request 404s.
The first GET request is from the initial page load.
The following POST+GET requests fire when clicking on the page tree to change pages.
Attempting to Find the Solution
I've tried deleting and re-installing SilverStripe twice (2.4.7 and 2.4.6). Both times the problem recurs.
A strange thing is that this server is already running two other SilverStripe sites (both of which I installed without a hitch). All three websites are accessed via different domains. I tried accessing this install via another domain, thinking there might be something wrong with how this third domain is configured, but that didn't help either.
What should I try now? I'm stumped.
Thanks in advance.
Responses to Comments
Check your root .htaccess file. Make sure RewriteBase is set to /
Checked. Full .htaccess on PasteBin
Indeed the JavaScript URL is strange. Check if there is anything unusual about what's being returned from the previous POST request. Is the site running in dev, test or live mode?
I can't see anything unusual in the POST request.
Clue Found: The site is running in DEV mode. Switching to LIVE mode makes the problem disappear. Also, the second GET request only shows up in DEV mode.
Example POST request with response.
Example GET request with response.
This is a workaround more than a fix, but if you'd rather be coding than bug hunting, it might be worth a go! (Remember to log out of SS before applying this fix.)
In your mysite/_config.php file change
Director::set_environment_type("dev");
to
if (!isset($_GET['isDev'])) {
    Director::set_environment_type("dev");
} else {
    Director::set_environment_type("live");
}
Then you can develop the website in dev mode normally; to use the admin in live mode and avoid the bug, just go to: http://{your_domain}/admin?isDev=0
N.B. I might find a proper answer when pastebin.com isn't overloaded and I can see your responses!

Facebook struggles to scrape one domain

I have already checked out this question, and it sounds like he's describing the same exact problem as me except for a few things:
I'm not running on https
80% of the time I try to debug, I get this message: "Error parsing input URL, no data was scraped."
The scraper works perfectly on a different domain, but same server, same theme, with almost identical content. Every time I try that domain, it scrapes it perfectly, including the image.
During the 20% of the time it actually scrapes my page, I am having the same issue as in the link above. It is reading my thumbnail, yet showing a blank image. The link brings me to a working image, but it doesn't want to show anything.
The weird part is it worked completely fine about 10 months ago when I updated this blog on a daily basis. The only difference is I've switched servers recently. While that would explain a possibility, the other domain switched as well and doesn't have this problem.
I am at a loss why my links either show no image at all on Facebook or give me this:
Domain Link
Domain
(no image, no description)
Very frustrating situation. Does anyone have any suggestions?
Update:
I have 6 domains...
When I moved servers recently, I found the new server wasn't prepared to compress the pages, so my blog posts looked crazy. This forced me to turn compression 'off' in WP Super Cache on my main blog. I also did it on my 2nd-highest-traffic blog, figuring I'd get to the other 4 later.
Well, now those first two blogs appear to work fine in the Facebook debugger, but the remaining 4 have trouble. The tricky part is, I completely removed WP Super Cache from one site and still had trouble fetching the data.
So while it seems logical that it should have been a WP Super Cache issue, continuing to have errors despite removing it leads me to believe otherwise. I'm still so baffled.
Update:
OK, I loaded Chrome and IE, and both were able to pull the data with ease. The Google snippet tool also worked great. I am going to try posting a link to my Facebook fan page via Chrome and see if it works correctly.
I did clear my FF cache and it didn't change anything, but I am still confused why one domain works OK while the other does not. Either way, if posting via Chrome works, I'll stick with that for now.
Any other suggestions?
The cache should not cause any problem. If a browser can see your page, so can the Facebook debugger.
See if there is some 500 error. Try from a different browser, clear the browser cache, etc. Try the Google rich snippet tool and see if a custom search engine is scraping it fine.
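You can also check what the scraper actually receives by requesting the page with its user agent. A rough Node.js sketch (Node 18+ for the built-in fetch; the URL is a placeholder, and facebookexternalhit/1.1 is the user agent Facebook's crawler sends):

    // Fetch the page the way Facebook's crawler would, then check
    // whether the og:image tag is present in the raw HTML.
    var url = 'http://example.com/some-post/';  // placeholder for the failing URL
    fetch(url, { headers: { 'User-Agent': 'facebookexternalhit/1.1' } })
        .then(function (res) {
            console.log('status:', res.status);
            return res.text();
        })
        .then(function (html) {
            var match = html.match(/<meta[^>]+property="og:image"[^>]*>/i);
            console.log(match ? match[0] : 'no og:image tag found');
        })
        .catch(function (err) { console.error('request failed:', err); });

If this request hangs or returns something different from what a browser gets, the server is treating the crawler differently (compression, caching, or user-agent rules).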
PS: It would be nicer if you posted the URL.

DropDownList postback never finishes on iPad

I've seen several posts about DropDownLists getting cleared, or events not getting fired, but they don't seem to match this situation.
I've got (well, I've reduced the problem to) a very simple ASP.NET website: a master page with a content page. The content page has a single DropDownList with AutoPostBack set to true. The code-behind updates a Label with the list's selected value. Not using UpdatePanel or AJAX (though I tried using them and got exactly the same results). It's an intranet site using Windows authentication.
It works fine in IE and Chrome, but every time I try it on my iPad it just sits and spins. The postback appears to be happening, but either nothing's coming back (or being accepted) from the server, or the client just doesn't know how to finish things up, or I don't know what.
Sorry if this seems vague, but I've spent two hours on Google and haven't come up with anything other than the fact that a simple page like this should work fine on an iPad, so I'm a little punchy.
Anybody got any pointers or ideas?
EDIT: Running this page through the remote web access portal my company uses, it works fine. So this may be an authentication problem between the iPad and IIS.
Not sure I have an answer, but do you have the issue if you remove the DropDownList? If you need to build the list based on data, maybe you could use an asp:Repeater and build an HTML select list, as sketched below.
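If you go that route, a small client-side handler can replace the AutoPostBack round-trip entirely. A rough sketch (the element ids are illustrative, not from your page):

    // Update the label on the client instead of posting back. Assumes the
    // Repeater renders a plain <select id="mySelect"> and the label renders
    // as <span id="selectedLabel"> (both ids are made up for this example).
    document.getElementById('mySelect').addEventListener('change', function (e) {
        document.getElementById('selectedLabel').textContent = e.target.value;
    });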

Need help with an inconsistently loading iframe

I'm trying to include a Facebook share iframe on a site that's served using Flask and Apache. The iframe loads inconsistently, however, and I am at a loss for possible explanations. Here is what I have observed:
The iframe loads correctly in Firefox and Safari but not Chrome 10.0 dev, on Mac
In Chrome, the iframe never loads correctly when I load the entire page
If I strip half of the elements from the page, the iframe loads correctly maybe three times out of ten - doesn't matter which half I remove.
If I strip all of the elements from the page, the iframe loads correctly every time.
The inconsistent behavior makes me think there's some sort of race going on, but I don't understand what the problem would be, or why it would only appear in Chrome. Anyway, I appreciate your help. You can view the site here. Thanks, Kevin
I think the key is in this statement:
If I strip half of the elements from the page, the iframe loads correctly maybe three times out of ten - doesn't matter which half I remove.
I'd dump the output to a text file and run tidy(1) or xmllint(1) over the response to see if you have a mismatched HTML tag. Chances are Chrome is not handling the error correctly, while Firefox and Safari are able to recover.
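If the markup validates cleanly, another quick experiment is to inject the iframe only after the page has fully loaded, which would tell you whether it's a race with parsing. A rough sketch (the src and container id are placeholders):

    // Defer the share iframe until the window 'load' event to test the
    // race theory. Swap in the real share URL and container element.
    window.addEventListener('load', function () {
        var iframe = document.createElement('iframe');
        iframe.src = 'https://www.facebook.com/plugins/like.php?href=YOUR_PAGE_URL';
        iframe.width = '120';
        iframe.height = '28';
        document.getElementById('share-container').appendChild(iframe);
    });

If the deferred version loads reliably in Chrome, the original failure is likely a parsing/timing race rather than anything on Facebook's side.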