It's well known that you must manipulate DOM elements inside directives when using AngularJS.
However, it seems that in some use cases manipulating the DOM inside a service is acceptable.
Misko Hevery talks about this here; you can also find an example within the Bootstrap UI Dialog.
Misko's explanation is rather vague, so I'm wondering: how do you determine when to put DOM manipulation in a service instead of a directive?
A directive, with the way it is defined, is always attached to a DOM node. So when you define a directive, it "expands" or replaces the DOM node to which it is attached.
In certain situations (like dialogs) you won't be able to attach DOM nodes to any specific parent. In these cases using a service makes sense, and the controller can still stay out of the DOM business because the DOM manipulation will be encapsulated in the service.
Popups could be another situation where we could probably use a service, but unlike a dialog, a popup IS attached to a DOM node. So even that is a bit of a grey area.
So, a basic and simple test is, "Can this bit of DOM Manipulation code be attached to a DOM node?" If yes, then directive. If no, then service.
Dialogs and Custom Confirm Boxes come in as typical examples where you would use a service.
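As a rough illustration (a minimal sketch, not the actual Bootstrap UI implementation; the service name and API are hypothetical), a dialog service can own the DOM work of attaching to <body>:

    var app = angular.module('app', []);

    app.service('dialog', function ($document, $compile, $rootScope) {
      // A dialog has no natural parent node, so the service compiles
      // the template and appends it to <body> itself.
      this.open = function (template) {
        var scope = $rootScope.$new();
        var element = $compile(template)(scope);
        $document.find('body').append(element);
        return {
          close: function () {
            element.remove();
            scope.$destroy();
          }
        };
      };
    });

A controller can then call dialog.open(...) without ever touching the DOM itself.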
Whilst I think Ganaraj has described what Misko was saying well, you could (and potentially should) argue that the DOM manipulation code for a modal dialogue (for example) can in fact be attached to a DOM node.
One approach is to have a dialog directive attached to the DOM the whole time, and then use ng-show to conditionally show it. You can then communicate with the modal dialog using either $rootScope, or better: a service.
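A minimal sketch of that approach (all names hypothetical): the directive stays in the DOM the whole time and just reflects a flag held by the service:

    var app = angular.module('app', []);

    app.factory('modalService', function () {
      var state = { visible: false, message: null };
      return {
        state: state,
        show: function (message) { state.message = message; state.visible = true; },
        hide: function () { state.visible = false; }
      };
    });

    app.directive('myModal', function (modalService) {
      return {
        restrict: 'E',
        template: '<div class="modal" ng-show="state.visible">{{state.message}}</div>',
        link: function (scope) {
          // The directive only exposes the service state to its template.
          scope.state = modalService.state;
        }
      };
    });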
I actually prefer this approach because it feels 'right' - the service handles the transfer of data and the directives interact with the service to make sure it gets displayed in a way that makes sense to the user.
But I think the fact that Misko is not particularly clear about it shows that it's quite subjective. Do what makes the most sense to you.
I'm googling this topic to reinforce my inner feeling about it, because I'm working on an issue right now that makes me want to place certain logic in a service. Here's my case, and what I think is a perfectly good justification for putting DOM-based handling in a service:
I have directive-based elements that react to mouse position globally (e.g. they move or change in some way based on mouse position). There are an indeterminate number of these elements, and they don't pertain to any specific location in the application (since it's a GUI element and can pertain to any container anywhere). If I were to adhere to the strict Angular "DOM logic in the directives only" rule, it'd be less efficient, because all the elements share the logic for parsing the mouse position (efficiently), which revolves around window.requestAnimationFrame ticks.
Were I to bundle that logic into the directive, I'd have a listener/rAF loop tied to every single instance. While it'd still be DRY, it wouldn't be efficient, since on every move of the mouse you'd be firing the exact same listener, returning the exact same result, for every single element.
It's actually best in this case to move this into a service, despite it being DOM-based logic, and register each directive instance with the service: the shared work is performed once, and only the per-instance logic runs for each element.
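A minimal sketch of that idea (names hypothetical): the service owns the single listener and the single rAF loop, and each directive registers a callback for its per-instance work:

    var app = angular.module('app', []);

    app.factory('mouseTracker', function ($window) {
      var callbacks = [];
      var mouse = { x: 0, y: 0 };
      var scheduled = false;

      // One global listener and one rAF tick, shared by every instance.
      $window.addEventListener('mousemove', function (e) {
        mouse.x = e.clientX;
        mouse.y = e.clientY;
        if (!scheduled) {
          scheduled = true;
          $window.requestAnimationFrame(function () {
            scheduled = false;
            callbacks.forEach(function (cb) { cb(mouse); });
          });
        }
      });

      return {
        register: function (cb) { callbacks.push(cb); },
        unregister: function (cb) {
          var i = callbacks.indexOf(cb);
          if (i !== -1) callbacks.splice(i, 1);
        }
      };
    });

    app.directive('followsMouse', function (mouseTracker) {
      return function (scope, element) {
        // Per-instance logic only; the shared parsing happened in the service.
        function onMouse(mouse) {
          element.css('left', mouse.x + 'px');
          element.css('top', mouse.y + 'px');
        }
        mouseTracker.register(onMouse);
        scope.$on('$destroy', function () { mouseTracker.unregister(onMouse); });
      };
    });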
Remember, while Angular provides some very good advice on how to structure your code, that doesn't make it bulletproof or by any means a hard and fast rule, since it can't possibly cover all use cases. If you see a hole where the "best practices" seem to fail, it's because you're actually properly understanding the best practices, and you've now found a reason to break the rules on purpose.
That's just my 2 cents!!
I agree with @dudewad. At the end of the day a service (factory, provider, value) is just Angular's module pattern with the limitation of being implemented as a singleton. I think it's important that you get access to the DOM via an element that is passed into a directive's link function, and not by using document or other global approaches. However, I don't think it's important that the logic you apply to a DOM element obtained from a directive lives in the same module. For SRP reasons it can be favorable to break up the code a bit using a service: you might have a particularly complex piece of logic that makes more sense to have focused tests around, or you might want to use the logic in more than one directive, as pointed out by @dudewad.
One drawback to a DOM manipulation method based on changing a variable (i.e. ng-show="isVisible") is that the DOM manipulation occurs only on the next "JavaScript turn" (when isVisible is updated). You may need the DOM to be updated right away.
For example, a common scenario is displaying a "spinner" during transitions to a new route/state. If you were to set $scope.isVisible = true on the $routeChangeStart / $stateChangeStart event, and then $scope.isVisible = false on the $routeChangeSuccess / $stateChangeSuccess event, you would never see your ng-show, as the whole route/state change happens within one JavaScript turn. It would be better to use .show() and .hide() in those events, so that you actually see the spinner.
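A minimal sketch of that (the #spinner element is hypothetical; jqLite has no .show()/.hide(), so this uses .css() — with jQuery loaded you could call .show()/.hide() directly):

    var app = angular.module('app', ['ngRoute']);

    app.run(function ($rootScope) {
      // Assumes a #spinner element hard-coded in index.html.
      var spinner = angular.element(document.getElementById('spinner'));
      $rootScope.$on('$routeChangeStart', function () {
        spinner.css('display', 'block'); // visible immediately, no digest needed
      });
      $rootScope.$on('$routeChangeSuccess', function () {
        spinner.css('display', 'none');
      });
    });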
To bring this all back and make it relevant to the OP's question: in a situation where the DOM manipulation is a "spinner" modal being displayed, I would do it in the service, and I would do it with direct DOM manipulation methods rather than relying on a model change.
I recently started developing with Akka event sourcing/cluster sharding, and thanks to the online resources I think I've understood the basic concepts and how to create a simple application with it. I am, however, struggling to apply this methodology to a slightly more complex data structure:
As an example, let's think about webpages and URLs.
Each Page can be represented with an actor in the cluster (having its unique id as the path of the page, e.g. /questions/60037683).
On each page I can issue commands such as
Create page (so if the page does not exist, it will be created)
Edit page (editing the details of the page)
Get page content (and children)
Etc.
When issuing commands to single pages, everything is easy, as it's "written in the manual". But I have the added complexity that a web page can have children, so when creating a "child page" I need the parent to update its references to its children.
I thought of some possible approaches, but they feel incomplete.
Sending all events to the single WebPage, and when creating a page, finding the parent page (if any) and communicating to it that a new child has been added
Sending all events to the single WebPage, and when creating a page, sending the message to the parent first, which then creates a new command telling the child to initialize
Creating an infrastructure actor such as a WebPageRepository that keeps track of the page tree and relays CRUD commands to all web page actors
My real problem is, I think, handling the returned Futures properly when relaying messages to other actors that have to actually perform the job.
I'm getting quite confused, and some reading resources would be greatly appreciated.
Thanks for your time.
EDIT: the first version of this question talked about a generic, hierarchical, file-system-like structure. I updated it with the real purpose, webpages and URLs, and tried to clarify my issues better.
After some months of searching, I reached the conclusion that what I'm doing is trying to have actors behave transactionally, so that when something is created, the parent is also updated in a safe manner, meaning that if one operation fails, all operations that completed successfully are rolled back.
The best pattern for this, in my opinion, proved to be the saga pattern, which adds a bit of complexity to the whole process, but in the long run it does what I needed.
Basically I ended up implementing the main actor as a standalone piece (as it should be) that can receive create commands and add-child commands.
There is then a saga actor which takes care of creating the content, adding the child to the parent and rolling back everything if something fails during the process.
If someone else has a better solution, I'll be glad to hear it out.
I'm using an Autofac container for the entire lifetime of my application, but I want to dispose the components myself.
I.e. if I have builder.RegisterType<SomeType>(), I don't want the container to keep references to SomeType instances, which would keep them alive even if they're not referenced anywhere else (if RegisterInstance is used, OTOH, then of course the container must keep a reference to the singleton).
I can see that I can do builder.RegisterType<SomeType>().ExternallyOwned() which solves my problem for one type, but I don't want to write it for every type, and more importantly I also use builder.RegisterSource(new AnyConcreteTypeNotAlreadyRegisteredSource()); which doesn't give me the option of using ExternallyOwned.
Is there a way to specify "ExternallyOwned" for the entire container? Or, to put it another way, tell the container to disable the entire dispose feature and not keep references for objects it doesn't need?
There is not a way to disable container disposal services. You might try to hook in with something like the logging module, but I could see that not working 100% and missing edge cases if you're not careful.
Automatic tracking and disposal is a pretty common container feature. I'd recommend instead of fighting it you refactor your code to embrace it. It'll make life a lot easier.
In everyday front-end development I often use the DOM as a global event bus that is accessible to every part of my client-side application.
But there is one "feature" in it, that can be considered harmful, in my opinion: any listener can prevent propagation of an event emitted via this "bus".
So I'm wondering: when can this feature be helpful? Is it wise to allow one listener to "disable" all the others? What if that listener does not have all the information needed to make the right decision about such an action?
Update:
This is not a question about "what is bubbling and capturing" or "how Event.stopPropagation actually works".
This is a question about "is it a good solution to allow any subscriber to affect the event flow?"
We need stopPropagation() (I am talking about current usage in JS) when we want to prevent listeners from interfering with each other. However, it is not mandatory to do so.
Actual reasons to avoid stopPropagation:
Using it usually means that you are aware of other code waiting for the same event, and interfering with what the current listener does. If that is the case, then there may (see below) be a design problem here. We try to avoid managing a single thing in multiple different places.
There may be other listeners waiting for the same type of event, while not interfering with what the current listener does. In this case, stopPropagation() may become a problem.
But let's say that you put a magic listener on a container-element, fired on every click to perform some magic. The magic listener only knows about magic, not about the document (at least not before its magic). It does one thing. In this case, it is a good design choice to leave it knowing only magic.
If one day you need to prevent clicks in a particular zone from firing this magic, then, since it would be bad to expose document-specific distinctions to the magic listener, it is wise to prevent propagation elsewhere.
An even better solution though might be (I think) to have a single listener which decides if it needs to call the magic function or not, instead of the magic function being a stoppable listener. This way you keep a clean logic while exposing nothing.
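A sketch of that idea (names hypothetical): one delegating listener makes the decision, and the magic function never needs stopPropagation():

    function performMagic(event) { /* ...the magic... */ }

    // '.no-magic' marks the zone that should not trigger the magic.
    var container = document.querySelector('#container');
    container.addEventListener('click', function (event) {
      if (!event.target.closest('.no-magic')) {
        performMagic(event);
      }
    });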
To provide (I am talking about API design) a way for subscribers to affect the flow is not wrong; it depends on the needs behind this feature. It might be useful to the developers using it. For example, stopPropagation has been (and is) quite useful for lots of people.
Some systems implement a continueX method instead of stopX. In JavaScript, it is very useful when the callees may perform some asynchronous processing like an AJA* request. However, it is not applicable to the DOM, as the DOM needs results in time. I see stopPropagation as a clever design choice for the DOM API.
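For illustration, a toy continueX-style bus (entirely hypothetical API): each listener receives a proceed callback, so propagation can continue after asynchronous work:

    // Propagation advances only when the current listener calls proceed().
    function emit(listeners, event) {
      (function next(i) {
        if (i < listeners.length) {
          listeners[i](event, function proceed() { next(i + 1); });
        }
      })(0);
    }

    emit([
      function (event, proceed) {
        setTimeout(function () { proceed(); }, 100); // async work first
      },
      function (event, proceed) {
        console.log('second listener ran for', event.type);
        proceed();
      }
    ], { type: 'demo' });

The DOM cannot wait like this, which is why stopPropagation is the pragmatic choice there.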
I'm a beginner to the flux model but I think I understand it at a high level:
action creator -> actions -> dispatcher -> stores -> views, and around we go!
Given that the flux model supports multiple stores, suppose a dispatch goes to 2+ stores, which in turn update the same view.
How do you manage any inadvertent flicker that would come from that process?
I haven't used React yet (I assume a catch-all answer will be that React handles this heavy-lifting part of reducing this), but conceptually, how could this work outside a specific implementation?
Since store changes are applied serially across stores, do you just wait until all the stores are done processing the dispatch, and then allow them individually to fire all their changes? Even then, you'd still loop through and fire change events at the end, and you'd still potentially have overlapping updates to the UI.
Thanks!
You have different options here:
The vanilla solution is to utilize a waitFor() function in your store-structure, and ensure that in the end each component has only one store it listens to. More or less like this:
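A minimal sketch of that wiring, using Facebook's flux Dispatcher (store shapes are hypothetical):

    var Dispatcher = require('flux').Dispatcher;
    var dispatcher = new Dispatcher();

    var storeA = { items: [] };
    var storeAToken = dispatcher.register(function (action) {
      if (action.type === 'ADD_ITEM') storeA.items.push(action.item);
      // Per the advice below: emit change even if the action was irrelevant.
      emitChange('storeA');
    });

    dispatcher.register(function (action) {
      // storeB must not read storeA before storeA has processed the action.
      dispatcher.waitFor([storeAToken]);
      emitChange('storeB');
    });

    function emitChange(name) {
      // Placeholder: a real store would extend EventEmitter.
      console.log(name + ' changed');
    }

    dispatcher.dispatch({ type: 'ADD_ITEM', item: 'x' });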
The caveat is that your action types and store structures need to be in sync: each action needs to communicate with all stores that are included in a waitFor cycle. The example in the picture will fail to trigger a render: the top-most store is not listening to the action from the dispatcher, and the right store will keep waiting for an update. Also, the red line may cause a similar dead end if it reaches only one of the components. My way of dealing with this is: make all stores in the first line listen to ALL actions, and if an action is irrelevant to a store, emit change anyway.
The other option is to consolidate your data into a single store.
This does not make the issue go away: you need to handle the dependency issues inside the single store. But it does take away the hassle of many actions, many waitFors, and many change emissions.
Remember that the action is processed synchronously: all stores will have emitted, the controller-views will have called setState, etc., before the stack unwinds and the browser gets a chance to re-render the DOM, so flicker is not possible (the browser won't render in the middle of a function running, since otherwise all DOM manipulation code would cause random flickering).
However, as you say, there will potentially be multiple stores emitting changes, and multiple components listening to them, and hence you may end up calling setState multiple times (even on the same component). This sounds inefficient, but under most circumstances it isn't. As long as the current action originated from an event that came from React (e.g. an event handler added to a component in the JSX), React automatically batches all calls to setState and only does the re-render to the DOM (i.e. any required DOM updates) once, immediately (and synchronously) after you have finished processing the event.
There is a case to be aware of - if you dispatch an action from something outside of a React event handler (e.g. a promise.then, an AJAX callback, setTimeout callback, etc.) then React will have to re-render for every single call to setState in that function, since it doesn't know when else to do it. You can avoid this by using the undocumented batched rendering feature (0.14, note that 0.13 had a different API for this):
ReactDOM.unstable_batchedUpdates(myFunctionThatDispatchesActions);
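For example (a hedged sketch; 'dispatcher' stands in for your app's Flux dispatcher), wrapping a dispatch that happens outside React's event system:

    var ReactDOM = require('react-dom');

    // A non-React callback (here a timer; an AJAX callback works the same way).
    setTimeout(function () {
      ReactDOM.unstable_batchedUpdates(function () {
        // Every setState triggered by this dispatch is batched into
        // a single synchronous re-render.
        dispatcher.dispatch({ type: 'ITEMS_LOADED', items: [] });
      });
    }, 0);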
An alternative might be to use an off-the-shelf Flux implementation which does this for you already. See e.g. https://github.com/acdlite/redux-batched-updates
Let's take this example. I have an AView which is bound to an AViewModel. AView should execute an ACommand on AViewModel and pass it a parameter. The problem is that AView doesn't have enough information to pass to the command, so another view, BView, needs to be displayed in order to gather information from the user. After BView is closed, AView invokes ACommand on AViewModel and passes the parameter to it.
How do I handle this scenario? Should I allow AView to communicate directly with BView, or am I breaking some rule if I do so?
Another way I'm considering is to invoke ACommand on AViewModel without a parameter, then from AViewModel send a message that information is required to complete the task. This message is captured by MainPageViewModel, which then sends a request to open BView, which is bound to BViewModel. When BView is closed, BViewModel sends a message with the additional info; AViewModel has subscribed to this type of message, so it receives it and completes the task. Pretty complicated for just entering values in two text boxes, right? :)
There are 3 golden rules to MVVM: Separation, Separation & Separation :)
Reasons include: automated testing of components, complete separation of functionality (e.g. for independent modules), independent team development of modules (without tripping over each other), and generally just making it easier to figure out what does what.
In answer to your question about interconnecting two Views: you are adding dependencies that should not exist. Separation of concerns is more important than a little complexity (and I would argue that the messaging model is less complex than maintaining a direct interconnection).
The complexity of publishing/listening for an extra message is nothing compared to the harm of interconnecting unrelated components so your last suggestion is "better", but I would actually suggest a cleaner approach to the whole problem:
A few guidelines:
Views should not know where their data comes from, only how to display a certain shape of data. Command execution is via bindings to ICommands on the VM.
ViewModels should hold a certain shape of data and commands. They should have no idea where the data comes from or what is binding to them.
Models hold the actual data, but have no idea where it is consumed.
Controllers (often overlooked in MVVM) register/push events, populate VMs from models, set the code in ICommands, control visibility of views etc. Controllers are the only thing actually required to stay in memory at all times and they are quite slim (mostly code & little data).
Basically I would suggest adding Controllers to your MVVM pattern (MVCVM?). App/modules create and initialize controllers. Controllers subscribe to events and provide the logic of the application.
Try this pattern and see how simple it becomes to work with lots of Views, ViewModels and Models. You do not mention what language or framework you are using, so I can't make specific suggestions for examples.