Multiple QueryClientProvider in one project - react-query

I wonder whether using multiple QueryClientProvider instances in a single React project is a good approach (or whether it even works, e.g. with React Query Devtools). A colleague who is developing another module added react-query to its dependencies and wrapped that module with a QueryClientProvider (instead of wrapping the whole app). I haven't come across this approach before. I want to use react-query in future modules and refactor existing ones, but I'm not sure where the provider should be declared.

Related

Dagger2 - should an activity component be a subcomponent of another?

I have used @Subcomponent mostly for cases where activities need to use some shared objects from the application component, or where fragment components want to use objects provided by the containing activity.
Now I am wondering if I can make some activity components subcomponents of another activity component. For example, TaskDetailActivity has a task object and wants to provide it to some other activities such as TaskParticipantActivity and TaskProgressActivity, and to some fragments.
The traditional way to provide the task object to other activities is to put it into the intent, but what if we want to use Dagger 2 for this case?
Update: my situation is similar to the UserScope case in this article http://frogermcs.github.io/dependency-injection-with-dagger-2-custom-scopes/, but instead of saving the user component in the Application class, can I save it in an activity, i.e. TaskDetailActivity?
Components are for grouping objects of a similar lifecycle. While Components may happen to correspond to a particular set of functionality (like a TaskComponent for injecting a TaskActivity and a TaskPresenter), it is not always possible or desirable to insist on only one Component per set of functionality (for instance, insisting on a single TaskComponent for all task-related activities and dependencies).
Instead, in Dagger 2 re-usability comes through Modules, which you can very easily swap in and out of Components. Within the constraints of the advice on organising your Modules for testability in the official Dagger 2 documentation, you are able to organise Modules congruent with your functionality (e.g., a TaskModule for all task-related dependencies). Then, because Components are so lightweight, you can create as many as you like to deal with the different lifecycles of your Activities and so on. Remember also that you can compose Modules using the Class<?>[] includes() member of the @Module annotation.
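For illustration, here is a minimal sketch of a functionality-oriented Module composed into a lightweight Component. The names (TaskModule, ApiModule, ActivityScope, TaskDetailComponent and the injected types) are hypothetical, not taken from the question:

```java
import dagger.Component;
import dagger.Module;
import dagger.Provides;

// Hypothetical module grouping task-related bindings;
// it composes another (hypothetical) module via includes().
@Module(includes = ApiModule.class)
class TaskModule {
    @Provides
    TaskRepository provideTaskRepository(TaskApi api) {
        return new TaskRepository(api);
    }
}

// A lightweight component tied to the lifecycle of one activity; because
// components are cheap, you can create one per distinct lifecycle.
@ActivityScope
@Component(modules = TaskModule.class)
interface TaskDetailComponent {
    void inject(TaskDetailActivity activity);
}
```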
In your particular scenario, you want to share the Task object from a TaskDetailActivity. If you held a reference to the Task within your TaskDetailActivity, that reference would no longer be available when TaskDetailActivity is destroyed. While you could try some solution of binding the Task in a Module and then maintaining a reference to that Module at the app scope, you would essentially be doing the same as the app-scoped UserScope in the article you have linked. Any kind of solution for sharing the Task object between Activities using Dagger 2 would necessarily involve maintaining a reference to the object at the app scope.
Using Dagger 2 doesn't mean that the new keyword or serialization/deserialization of Parcelables is now wrong, so if your first intuition is to use an Intent to communicate, then I would say that you are right. If you need a more robust solution than directly communicating the Task, then you could extract a TaskRepository and send an Intent between Activities that contains the id of the Task you wish to retrieve. Indeed, some of the Google Android Architecture Blueprints use a solution just like this.
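A rough sketch of that Intent-plus-repository approach follows. The extra key, MyApp, AppComponent, TaskRepository and its getTask method are illustrative assumptions, not taken from the Blueprints:

```java
// Inside TaskDetailActivity: pass only the task id across the Activity boundary.
void openProgress(Task task) {
    Intent intent = new Intent(this, TaskProgressActivity.class);
    intent.putExtra("EXTRA_TASK_ID", task.getId()); // "EXTRA_TASK_ID" is an illustrative key
    startActivity(intent);
}

// In the receiving activity: let Dagger inject an app-scoped repository
// and re-load the task by its id instead of passing the whole object around.
public class TaskProgressActivity extends Activity {

    @Inject TaskRepository taskRepository; // hypothetical app-scoped repository

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ((MyApp) getApplication()).getAppComponent().inject(this);

        String taskId = getIntent().getStringExtra("EXTRA_TASK_ID");
        Task task = taskRepository.getTask(taskId); // hypothetical lookup by id
        // ... render the task
    }
}
```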

How to use modules in dagger-2

I can't seem to grasp the modules of dagger.
Should I create a new instance of a module each time I want to inject stuff?
Should I create only one instance of a module? If so where should I do it?
Is there a more complex example of fragments and activities used with dagger?
Thanks
You should think more about @Component than @Module. Modules just create objects that need further initialization. The actual work happens in Components, of which Modules are a part.
Should I create a new instance of a module each time I want to inject stuff?
You should create your module when you create the Component it is part of, since only this component is going to need it. If you find yourself creating the same module multiple times, you are most likely doing something wrong.
A module takes additional arguments (pass them in via the constructor) to create more complex objects. So if you had, e.g., a UserModule, you'd pass in the user to create user-dependent objects from the resulting component. If the user changes, discard the old component and create a new module and a new component; the old objects should not be used anymore.
Keep the component where / when appropriate and be sure to use Scopes, since they determine the lifetime of your component.
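A minimal sketch of that constructor-argument pattern (UserModule, UserComponent, User and the Session holder are hypothetical names; DaggerUserComponent is the class Dagger would generate for such a component):

```java
import dagger.Component;
import dagger.Module;
import dagger.Provides;

// Module that needs a runtime value: pass it in through the constructor.
@Module
class UserModule {
    private final User user;

    UserModule(User user) {
        this.user = user;
    }

    @Provides
    User provideUser() {
        return user;
    }
}

@Component(modules = UserModule.class)
interface UserComponent {
    User user();
}

// Build the component when the user logs in; drop it (and everything it built)
// when the user changes, then create a fresh module and component.
class Session {
    UserComponent userComponent;

    void onUserLoggedIn(User loggedInUser) {
        userComponent = DaggerUserComponent.builder()
                .userModule(new UserModule(loggedInUser))
                .build();
    }
}
```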
Should I create only one instance of a module? If so where should I do it?
You will most likely create just a single instance of your @Singleton-annotated Components and Modules. In Android you'd typically keep the reference to the component (not the module!) in the Application class or some real 'singleton'.
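For example, a common pattern is to hold the app-scoped component in the Application class. AppComponent and AppModule are hypothetical names here, and DaggerAppComponent is the builder Dagger would generate for them:

```java
import android.app.Application;

public class MyApplication extends Application {

    // Keep the component, not the module: the component owns the object graph.
    private AppComponent appComponent;

    @Override
    public void onCreate() {
        super.onCreate();
        appComponent = DaggerAppComponent.builder()
                .appModule(new AppModule(this))
                .build();
    }

    public AppComponent getAppComponent() {
        return appComponent;
    }
}
```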
Is there a more complex example of fragments and activities used with dagger?
Try googling. There are lots of high-quality tutorials with linked GitHub repositories that go into much more depth and detail than would be possible here on SO, e.g. Tasting Dagger 2 on Android.

Dart and Live plugins

So one of the neat features of languages like PHP is that you can include other files programmatically and build a plugin-like system. I haven't seen an example of it yet, so I am not sure it is technically possible in Dart, but I would like to start designing a CMS that can load and unload plugins live, without a restart or a fresh upload.
Currently it is only possible to load/unload code dynamically using Isolates.
In the browser, new isolates don't have access to the DOM, and only a limited set of types can be passed between isolates. Anything that can be serialized to JSON can be passed between isolates easily, but custom types you need to serialize yourself. I'm not sure about the actual limitations here, though; this is work in progress.
In the browser the current limitations make it hard to make use of isolates. You can't load code into an isolate that imports 'dart:html', which prevents the use of any browser API. On the server there are no such limitations.
This should all improve over time, but currently there are still a lot of limitations.

Is it possible to make a REST call to a webscript from my own Java-backed webscript?

I'm writing a Java-backed webscript to deploy in Alfresco and call via REST. This webscript must perform a set of three operations (find a path, create a folder, and upload a document).
I read about this and found similar examples that do these operations through the native Alfresco API, with methods like getFileFolderService, getContentService, etc. of the Repository or ServiceRegistry classes; all in Java, without JavaScript.
But I would rather use REST calls instead of the Alfresco API inside my webscript. I think that if webscripts already exist to do these operations, it is easier to call them than to try to do the same with the Alfresco API methods. And if the API changes in future versions, the REST calls would remain the same. But I'm new here and I don't know if I'm wrong.
In summary: to do these three operations, one after another, in my Java-backed webscript, which is better and why: native API methods, or REST calls to the existing webscripts?
And if I try the second option, is it even possible? Would the HttpClient class with GetMethod/PostMethod be the best way to make the REST calls inside my Java webscript, or could that cause problems, given that a REST call to my backed webscript would then make further REST calls to other webscripts?
Thanks a lot!
I think it's bad practice to do it like this. In a lot of Alfresco versions the default services didn't change a bit, and even when they did change, the old methods were kept around as deprecated.
The REST API has changed as well. If you want an upgrade-proof system, I guess it's better to stick with the web services (which haven't changed since version 2.x) or go with CMIS.
But then it doesn't make sense to have your code within Alfresco, so putting it within an interface is better.
I'd personally just stick with the JavaScript API, which hasn't changed much. Yes, more functions have been added, but the default actions for search and CRUD have remained the same.
You could even do both: have your Java-backed script do whatever fancy stuff you need and send the result to the JavaScript controller, which does the default stuff.
Executing HTTP calls against the process you are already in is a very bad idea in general. It is slower, much more complex and error-prone, hogs more resources (two threads), and in your case you would even lose transaction safety. Just imagine the last call failing for some reason. Besides, you will most likely have to handle security-context propagation yourself. Use the native public API and it will be easy, safe and stable.
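As an illustration of the native-API route, here is a rough sketch of a Java-backed webscript that creates a folder and uploads a document using the foundation services. The class name, parameter names and Spring wiring are assumptions; check the exact signatures against your Alfresco version:

```java
import java.util.Collections;
import java.util.Map;

import org.alfresco.model.ContentModel;
import org.alfresco.service.ServiceRegistry;
import org.alfresco.service.cmr.model.FileFolderService;
import org.alfresco.service.cmr.repository.ContentWriter;
import org.alfresco.service.cmr.repository.NodeRef;
import org.springframework.extensions.webscripts.Cache;
import org.springframework.extensions.webscripts.DeclarativeWebScript;
import org.springframework.extensions.webscripts.Status;
import org.springframework.extensions.webscripts.WebScriptRequest;

public class CreateFolderAndUploadWebScript extends DeclarativeWebScript {

    // Injected via the Spring bean definition of the webscript.
    private ServiceRegistry serviceRegistry;

    public void setServiceRegistry(ServiceRegistry serviceRegistry) {
        this.serviceRegistry = serviceRegistry;
    }

    @Override
    protected Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) {
        FileFolderService fileFolderService = serviceRegistry.getFileFolderService();

        // 1. Resolve the parent (assumed here to arrive as a NodeRef string parameter).
        NodeRef parent = new NodeRef(req.getParameter("parent"));

        // 2. Create the folder.
        NodeRef folder = fileFolderService
                .create(parent, "My Folder", ContentModel.TYPE_FOLDER)
                .getNodeRef();

        // 3. Create the document node and write its content.
        NodeRef doc = fileFolderService
                .create(folder, "document.txt", ContentModel.TYPE_CONTENT)
                .getNodeRef();
        ContentWriter writer = serviceRegistry.getContentService()
                .getWriter(doc, ContentModel.PROP_CONTENT, true);
        writer.setMimetype("text/plain");
        writer.putContent("Hello from the webscript");

        return Collections.<String, Object>singletonMap("nodeRef", doc.toString());
    }
}
```

All three steps then run in the same transaction and security context, which is exactly what you would lose by looping back over HTTP.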

Manipulating other modules' functionality in Zend Framework 2

How can I manipulate other modules without editing them? Much the same thing that WordPress plugins do:
they add functionality to the core system without changing the core code, and they work together like a charm.
I have always wanted to know how to implement this in my own modular application.
A long time ago I wrote the blog post "Use 3rd party modules in Zend Framework 2", specifically about extending Zend Framework 2 modules. The answer from Bez is technically correct, but it could be a bit more specific about the framework.
Read the full post at https://juriansluiman.nl/article/117/use-3rd-party-modules-in-zend-framework-2, but in short it gives you a clue about:
Changing a route from a module (say, you want to have the url /account/login instead of /user/login)
Overriding a view script, so you can completely modify the page's rendering
Changing a form object, so you could add new form fields or mark some required field as not required anymore.
This is a long topic, but here is a short gist.
Extensibility in Zend Framework 2 heavily relies on the premise that components can be interchanged, added, and/or substituted.
Read up on SOLID principles: http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
Modules typically consist of objects working together as well-oiled machinery, designed to accomplish one thing or a bunch of related things, whatever that may be. These objects are called services, and they are managed by the service locator/service manager.
A big part of making your module truly extensible is to expect developers to extend a class or implement a certain interface, which they register as services. You should provide a means of definition whereby developers can specify which things they want to substitute and/or which services of their own they want to add, and this is where the application configuration comes in.
Given the application configuration, you should construct your machinery (a.k.a. module services) according to the options the developer has specified, e.g., use the developer-defined Foo\Bar\UserService service as the YourModule\UserServiceInterface within your module, etc. (This is usually delegated to service factories, which have the opportunity to read the application configuration and construct the appropriate object for a particular set of configuration values.)
EDIT:
To add to this, a lot can be accomplished by leveraging Zend's Zend\EventManager component. This allows you to give developers the freedom to hook into and listen to certain operations of your module and act accordingly (see: http://en.wikipedia.org/wiki/Observer_pattern).