ember.js route render: dom manipulation with setTimeout()?

If I wanted to call a jQuery plugin (which inserts a table into the DOM, for example) after a view has been rendered, is there any possibility other than doing it with window.setTimeout()?
This code does the job (with a 1ms timeout; that's weird):
Route.HomeRoute = Ember.Route.extend({
  renderTemplate: function() {
    this.render("home"); // render the home view
    window.setTimeout(function() {
      $(".tables").insertTables(); // this would add a table
    }, 1);
  }
});
But this code doesn't work:
Route.HomeRoute = Ember.Route.extend({
  renderTemplate: function() {
    this.render("home"); // render the home view
    $(".tables").insertTables(); // this would add a table
  }
});
I know that there's Ember.View.didInsertElement(), but for that I have to set a callback on the parent View. I was just wondering why the code example above doesn't work as expected.
Many thanks!

I don't know whether my answer is 100% accurate, but I guess this is how it can be explained:
I guess the problem is that you think the render() method does its job synchronously. But this is not the case. Instead, it schedules the rendering of the HomeView in Ember's RunLoop. In the second example your code schedules the rendering and then immediately tries to access its DOM elements (I guess .tables is part of the home template). But the View is not rendered at this point! The first example works because there is a 1ms timeout involved. During this timeout the Ember RunLoop kicks in and starts its magic: it performs the rendering, and afterwards, when the CPU is free again, your timeout function gets called.
What you actually want to do here is: do something with the DOM once the View has been successfully rendered. This can be done in Ember without setTimeout. The following code hooks into the RunLoop and schedules a function to run once the current RunLoop has finished its work.
Ember.run.next(function() {
  $(".tables").insertTables(); // this would add a table
});
Here is an article on the RunLoop, which is important reading if you want to understand these details of Ember:
- Article by machty
Last but not least: it seems totally awkward to do such DOM manipulation in the Route. Your Route should always be free of such things. Elements, selectors, and jQuery plugins should only be used in the View layer; everything else seems bad. Maybe you want to share more details about why you chose this approach? There is likely a better solution than this one.
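For completeness, here is a minimal sketch of what that looks like in the View layer (assuming an App namespace with a HomeView backing the home template, and the same hypothetical insertTables plugin from the question):
App.HomeView = Ember.View.extend({
  didInsertElement: function() {
    // At this point the view's element is in the DOM, so the plugin can run safely.
    this.$(".tables").insertTables();
  }
});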

The reason the second example doesn't work is probably the Ember.js run loop: this.render schedules the DOM insertion for later in the current run loop.
DOM insertion is done at the end of the run loop, and by using setTimeout you are calling the plugin after the run loop ends, therefore guaranteeing that the template has been injected into the DOM. (No need for the 1ms; 0ms would probably work.)
You might say this run loop thing is very complicated, especially for an Ember.js beginner. The thing is, ideally it is supposed to be transparent to the app developer. The reason you are encountering its side effects is that DOM manipulation should not be handled in the router.
My first reaction was to tell you to use didInsertElement, or any code or hook inside the View, because that's where DOM manipulation should happen. But it seems you are aware of that and cannot use it for some reason (which I can't confirm or deny because I don't have enough information).
My advice to you, try your best to do it in didInsertElement.

Related

How to tell TinyMCE UndoManager to ignore changes until explicitly notified to resume?

Is it possible to use TinyMCE's UndoManager.ignore() when the callback is an asynchronous process?
What I am looking for is a way to "start ignoring" and a way to "stop ignoring".
(The background is that I have an async post-process that modifies the editor content, but I don't want those modifications to be part of the Undo/Redo stack, since they are not user-generated.)
This doesn't work, because the ignore() block finishes before the promise is resolved:
editor.undoManager.ignore(function() {
  doAsyncProcess(editor).then(function() {
    // doesn't work
  });
});
What I want is something like this:
editor.undoManager.startIgnoring();
doAsyncProcess(editor).then(function() {
  editor.undoManager.stopIgnoring();
});
but of course those APIs do not exist. Is there a workaround for this?
What I am looking for is a way to "start ignoring" and a way to "stop ignoring".
That is hardly implementable, mainly because it could break something. Imagine the situation where, during that 'ignore time', something happens outside your process that needs a new undo level.
Generally, all editor content operations within TinyMCE need to be synchronous. So the solution is to fetch all the data you need asynchronously first, and then apply the update once it has all been fetched.
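A minimal sketch of that pattern, assuming doAsyncProcess resolves with the post-processed content:
doAsyncProcess(editor).then(function(newContent) {
  // All the async work is finished; the content swap itself is synchronous,
  // so ignore() can wrap it without adding an undo level.
  editor.undoManager.ignore(function() {
    editor.setContent(newContent);
  });
});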

Understanding Chrome Dev Tools timeline

I'm trying to understand why I have several Long Frames reported by Chrome Dev Tools.
The first row (top of the call stack) in the flame chart is mostly Timer Fired events, triggered by jQuery.Deferred()s executing a bunch of $(function(){ }); ready funcs.
If I dig into the jQuery source and replace their use of setTimeout with requestAnimationFrame, the flame chart doesn't change much; I still get many of the rAFs firing within a single frame (as reported by dev tools), making long frames. I'd have expected the pseudocode below:
window.requestAnimationFrame(function() {
  // do stuff
});
window.requestAnimationFrame(function() {
  // do more stuff
});
to be executed on two different animation frames. Is this not the case?
All of the JS that is executing is necessary, but what should I do to execute it as "micro tasks" (as hinted at, but not explained, here: https://developers.google.com/web/fundamentals/performance/rendering/optimize-javascript-execution), when setTimeout and rAF don't seem to achieve this?
Update
Here's a zoomed in shot of one of the long frames that doesn't seem to have any reflows (forced or otherwise) in it. Why are all the rAF callbacks here being executed in one frame?
Long frames are usually caused by forced synchronous layouts, which is when you (unintentionally) force a layout operation to happen early.
When you write to the DOM, the layout needs to be reflowed because it has been invalidated by the write operation. This usually happens at the next frame. However, if you try to read from the DOM, the layout happens early, in the current frame, in order to make sure that the correct value gets returned. When forced layout occurs, it causes long frames, leading to jank.
To prevent this from happening, you should only perform the write operations inside your requestAnimationFrame function. The read operations should be done outside of this, so as to avoid the browser doing an early layout.
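As a rough sketch of that separation (the element and the style change here are just hypothetical placeholders):
// Read layout values outside the rAF callback...
var box = document.querySelector('.box');
var width = box.offsetWidth;

// ...then batch the DOM writes inside requestAnimationFrame.
window.requestAnimationFrame(function() {
  box.style.width = (width / 2) + 'px';
});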
Diagnose Forced Synchronous Layouts is a nicely explained article, and has a simple example demo for detecting forced reflow in DevTools, and how to resolve it.
It might also be worth checking out FastDom, which is a library for batching your read and write. It is basically a queuing system, and is more scalable.
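Usage looks roughly like this (a sketch assuming FastDom's measure/mutate API; check the library's docs for the exact method names in your version):
fastdom.measure(function() {
  // reads are batched together...
  var height = box.offsetHeight;
  fastdom.mutate(function() {
    // ...and writes run afterwards, so the reads never force an early layout
    box.style.height = (height * 2) + 'px';
  });
});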
Additional Source:
What forces layout / reflow, by Paul Irish, contains a comprehensive list of properties and methods that will force layout/reflow.
Update: As for the assumption that multiple requestAnimationFrame calls will execute their callbacks on separate frames, this is not the case. When you make consecutive calls, the browser adds the callbacks to a document-level list of animation callbacks. When the browser goes to run the next frame, it traverses that list and executes each of the callbacks in the order they were added.
See Animation Frames from the HTML spec for more of the implementation details.
This means that you should avoid consecutive calls, especially where the callbacks' combined execution times exceed your frame budget. I think this would explain the long frames that aren't caused by reflow.
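If the work really does need to be spread over several frames, one option is to chain the calls instead of queuing them back to back (a sketch, not a drop-in fix for jQuery's ready handling):
window.requestAnimationFrame(function() {
  // do stuff in frame 1
  window.requestAnimationFrame(function() {
    // do more stuff in frame 2, after the first frame has been presented
  });
});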

do the jquery(selector,...) overloads wait for the dom to finish loading?

The jquery(callback) docs clearly state that it waits for the DOM to finish loading before running the function. [ref: http://api.jquery.com/jQuery/#jQuery3]
The jquery(selector, ...) docs on the other hand, seem unclear as to whether the DOM will be finished loading by the time the selector runs.
So, and here is the real question: please can someone tell me whether I really need to nest all my selectors inside of a jquery(callback) like I am currently doing?
jquery(function() { jquery(selector).dostuff(); })
(or $(function() { $(selector).dostuff(); }) which is the same)
The jQuery(callback) overload is a shorthand for jQuery(document).ready(callback), so it will run the code in the callback function when the document has been parsed.
The jQuery(selector, ...) is not a shorthand for any event binding, it will return the elements matched by the selector at the moment that the code runs.
A method that doesn't use a callback is simply not able to wait until the document has been parsed. If the method just waited for the document to finish, that would never happen: while the JavaScript code is running, the browser doesn't continue to parse the document.
No, jQuery(selector) does not wait for the DOM to finish loading. jQuery(callback) is just a shorthand for jQuery(document).ready(callback), and the ready event represents the DOM being loaded.
You only need to wrap your code when it requires the current page (in the DOM) to be available when it runs.
It says in the jQuery docs that the callback function waits for the DOM to load; when you select an element you do not have a callback function.
This has a callback function.
jQuery(function() {
});
This doesn't
jQuery('#element');
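To make the difference concrete, here is a small sketch (assuming an element with id "element" that appears later in the markup than the script):
// Runs immediately: the element isn't in the DOM yet, so nothing matches.
console.log(jQuery('#element').length); // 0

jQuery(function() {
  // Runs after the document has been parsed, so the element is found.
  console.log(jQuery('#element').length); // 1
});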

What's a good maintainable way to name methods that are intended to be called by IBActions?

I am creating a function (for example) to validate content, then if it is valid, close the view, and if it is not, present further instructions to the user. (Or other such actions.) When I go to name it, I find myself wondering: should I call it -doneButtonPressed or -validateViewRepairAndClose? Would it be better to name the method after the UI action that calls it, or after what it does? Sometimes it seems simple, things like -save are pretty clear cut. Other times (I can't think of a specific example right off) naming them after what they do is just so long and confusing it seems better to just call them xButtonPressed, where x is the word on the button.
It's a huge problem!!! I have lost sleep over this.
Purely FWIW ... my vote is for "theSaveButton" "theButtonAtTheTopRight" "userClickedTheLaunchButton" "doubleClickedOnTheRedBox" and so on.
Generally we name all those routines that way. However .. often I just have them go straight to another routine "launchTheRocket" "saveAFile" and so on.
Has this proved useful? It has because often you want to launch the rocket yourself ... in that case call the launchTheRocket routine, versus the user pressing the button that then launches the rocket. If you want to launch the rocket yourself, and you call userClickedTheLaunchButton, it does not feel right and looks more confusing in the code. (Are you trying to specifically simulate a press on the screen, or?) Debugging and so on is much easier when they are separate, so you know who called what.
It has proved slightly useful for example in gathering statistics. The user has requested a rocket launch 198 times, and overall we've launched the rocket 273 times.
Furthermore -- this may be the clincher -- say from another part of your code you are launching the rocket, using the launch-the-rocket message. It makes it much clearer that you are actually doing that rather than something to do with the button. Conversely, the userClickedTheLaunchButton concept could change over time: it might normally launch the rocket, but sometimes it might just bring up a message, or who knows what.
Indeed, clicking the button may also trigger ancillary stuff (perhaps an animation or the like) and that's the perfect place to do that, inside 'clickedTheButton', as well as then calling the gutsy function 'launchTheRocket'.
So I actually advocate the third even more ridiculously complicated solution of having separate "userDidThis" functions, and then having separate "startANewGame" functions. Even if that means normally the former does almost nothing, just calling the latter!
BTW another naming option would be combining the two... "topButtonLaunchesRockets" "glowingCubeConnectsSocialWeb" etc.
Finally! Don't forget you might typically set them up as an action, which changes everything stylistically.
[theYellowButton addTarget:.. action:#selector(launchRockets) ..];
[theGreenButton addTarget:.. action:#selector(cleanUpSequence) ..];
[thatAnimatingButtonSallyBuiltForUs addTarget:.. action:#selector(resetAll) ..];
[redGlowingArea addTarget:.. action:#selector(tryGetRatingOnAppStore) ..];
perhaps that's the best way, documentarily wise! This is one of the best questions ever asked on SO, thanks!
I would also go with something along the lines of xButtonPressed: or handleXTap: and then call another method from within the handler.
- (IBAction)handleDoneTap:(id)sender {
    [self closeView];
}

- (void)closeView {
    if ([self validate]) {
        // save and close
    }
    else {
        // display error information
    }
}

Will inserting the same `<script>` into the DOM twice cause a second request in any browsers?

I've been working on a bit of JavaScript code that, under certain conditions, lazy-loads a couple of different libraries (Clicky Web Analytics and the Sizzle selector engine).
This script is downloaded millions of times per day, so performance optimization is a major concern. To date, I've employed a couple of flags like script_loading and script_loaded to try to ensure that I don't load either library more than once (by "load," I mean requesting the scripts after page load by inserting a <script> element into the DOM).
My question is: Rather than rely on these flags, which have gotten a little unwieldy and hard to follow in my code (think callbacks and all of the pitfalls of asynchronous code), is it cross-browser safe (i.e., back to IE 6) and not detrimental to performance to just call a simple function to insert a <script> element whenever I reach a code branch that needs one of these libraries?
The latter would still ensure that I only load either library when I need it, and would also simplify and reduce the weight of my code base, but I need to be absolutely sure that this won't result in additional, unnecessary browser requests.
My hunch is that appending a <script> element multiple times won't be harmful, as I assume browsers should recognize a duplicate src URL and rely on a local cached copy. But, you know what happens when we assume...
I'm hoping that someone is familiar enough with the behavior of various modern (and not-so-modern, such as IE 6) browsers to be able to speak to what will happen in this case.
In the meantime, I'll write a test to try to answer this first-hand. My hesitation is just that this may be difficult and cumbersome to verify with certainty in every browser that my script is expected to support.
Thanks in advance for any help and/or input!
Got an alternative solution.
At the point where you insert the new script element in the DOM, could you not do a quick scan of existing script elements to see if there is another one with the same src? If there is, don't insert another?
Javascript code on the same page can't run multithreaded, so you won't get any race conditions in the middle of this or anything.
Otherwise you are just relying on the caching behaviour of current browsers (and HTTP proxies).
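A rough sketch of that scan-before-insert idea (the function name and the head-append are my own placeholders; note that a script element's src property is reported as an absolute URL, so compare against an absolute URL):
function insertScriptOnce(src) {
  var scripts = document.getElementsByTagName('script');
  for (var i = 0; i < scripts.length; i++) {
    if (scripts[i].src === src) {
      return; // this script has already been requested, don't insert it again
    }
  }
  var s = document.createElement('script');
  s.src = src;
  document.getElementsByTagName('head')[0].appendChild(s);
}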
The page is processed as a stream. If you load the same script multiple times, it will be run every time it is included. Obviously, due to the browser cache, it will be requested from the server only once.
I would stay away from this approach of inserting script tags for the same script multiple times.
The way I solve this problem is to have a "test" function for every script to see if it is loaded. E.g. for sizzle this would be "function() { return !!window['Sizzle']; }". The script tag is only inserted if the test function returns false.
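In code, the pattern looks roughly like this (loadScript stands in for whatever insertion helper you already use):
var tests = {
  sizzle: function() { return !!window['Sizzle']; }
};

function requireScript(name, url) {
  // only insert the script tag if the test function says it isn't loaded yet
  if (!tests[name]()) {
    loadScript(url);
  }
}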
Each time you add a script to your page, even if it has the same src, the browser may find it in the local cache or ask the server whether the content has changed.
Using a variable to check whether the script is already included is a good way to reduce loading, and it's very simple.
For example, this may work for you:
var LOADED_JS = {};

function js_isIncluded(name) { // returns true if the js is already loaded
  return LOADED_JS[name] !== undefined;
}

function include_js(name) {
  if (!js_isIncluded(name)) {
    YOUR_LAZY_LOADING_FUNCTION(name);
    LOADED_JS[name] = true;
  }
}
You can also get all script elements and check their src, but my solution is better because it has the speed and simplicity of a hash lookup, and a script's src is reported as an absolute path even if you set it with a relative path.
You may also want to initialize the map at page init with the scripts that are loaded normally (without lazy loading), to avoid a double request.
For what it's worth, if you define the scripts as type="module", they will only be loaded and executed once.