When using Clean Architecture with Flutter, we see a diagram similar to this:
(I used the MobX package as an example, but it could be anything like BLoC, Redux...)
I may have a problem with this diagram, since the Store resides in the presentation layer and is responsible for changing the state.
Imagine that the app loads a list of todos through a method called "getTodos" implemented in TodosStore.
The "getTodos" implementation might be:
_state = State.loading();
final result = await getTodosUseCase();
_state = State.done(result);
(I oversimplified things)
It starts by setting the state to loading, calls a use case that returns the list, and then sets the state to done.
The problems I found here:
Stores are responsible for calling UC, updating state, handling errors...
The use case is simply a bridge and does not handle the business logic.
This is a rather simple example.
But let's imagine that I have a view with 3 lists of different data (it may be an absurd example).
The view interacts with 3 different stores. There's a button in an app bar. Its goal is to clear the 3 lists.
How do I achieve that behavior?
The button onPressed method will need to call "clearData" for each store
The "clear" method of each store will simply updates its properties
The thing is that the view is not as dumb as we wanted.
The store is not even interacting with any use case
Would a ClearLists use case be legitimate here?
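For illustration, such a composite use case might look like this (a minimal sketch; the three store types and their clear methods are assumptions for the example, not code from my app):

class ClearLists {
  final TodosStore todosStore;
  final ContactsStore contactsStore;
  final EventsStore eventsStore;

  ClearLists(this.todosStore, this.contactsStore, this.eventsStore);

  void call() {
    // The single business intent "clear everything" lives here,
    // so the button's onPressed only has to trigger one call.
    todosStore.clear();
    contactsStore.clear();
    eventsStore.clear();
  }
}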
Since I do not like having much logic in the presentation layer, I tend to follow this diagram:
Each view has its own ViewModel. The VM simply interacts with the use cases. These UCs may or may not return a value. For instance: if my UC is ValidateNumberRange, it might not interact with a store, so it makes sense to return a bool. But if my UC is ClearTodoList, it might interact with a store, and the success or failure may be a stored value; in that case returning a value may not be useful.
With this new diagram, the "GetTodos" use case callable method implementation might be:
store.set(State.loading());
final result = await repo.getTodos();
result.fold(
  (failureMsg) { store.set(State.failure(failureMsg)); },
  (newList) { store.set(State.done(newList)); },
);
The use case handles the state management logic.
The view calls a VM method. The VM calls a UC and observes changes in the store (through MobX here).
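A minimal sketch of that wiring (class and method names are illustrative, not from a real codebase):

class TodosViewModel {
  final GetTodos getTodos; // use case, injected
  final TodosStore store;  // observable state, e.g. a MobX store

  TodosViewModel(this.getTodos, this.store);

  // The VM only forwards the intent; the use case updates the store,
  // and the view rebuilds by observing the store.
  Future<void> loadTodos() => getTodos();
}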
I ask myself a ton of questions:
Did I correctly understand the role of UCs in Clean Architecture?
Is changing state (to loading, done, failure) considered business logic?
Where would you put the state management solution in your app?
I look forward to hearing your thoughts.
Use cases encapsulate business rules, and they are platform-agnostic and delivery-mechanism-agnostic (e.g. UI).
Use Cases are based on functionality, but without implementation details.
Further reading: https://dannorth.net/introducing-bdd
Changing state is the responsibility of the presentation layer; the presentation layer != the UI layer.
Think of it like this:
Domain layer <= Infrastructure layer <= Application layer <= Presentation layer <= UI layer
The dependencies should always point inward.
Imagine it like this: side-effect-free business logic (the functional core) is always at the center, and the rest (stateful elements, UI, frameworks) surrounds it.
From your diagram:
AppState and ViewModel always reside in the presentation layer. Flutter-specific classes belong to the UI.
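A minimal Dart sketch of that inward dependency rule (all names, including the Todo type, are illustrative):

// Domain layer: pure abstraction, knows nothing about Flutter or MobX.
abstract class TodosRepository {
  Future<List<Todo>> getTodos();
}

// Application layer: the use case depends only on the domain abstraction.
class GetTodos {
  final TodosRepository repo;
  GetTodos(this.repo);
  Future<List<Todo>> call() => repo.getTodos();
}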
Related
I am working with a library (ScalaJS and react specifically) where I have an interesting situation that I assume is pretty routine for an experienced reactive programmer. I have a Component with State and a callback shouldComponentUpdate(State). The basic idea here is that certainly if the callback is triggered but the State has not changed from the last render, return false. Otherwise, perhaps return true if the State change matters.
I am using the monix library, but it seems identical to other reactive libraries, so I would imagine this is a fairly context-independent question.
I would like to do something like: have some state that reflects the deltas of State since the last render. On each render, clear the buffer. Or, have a renderedState subject that reflects all rendered states as a sequence, a receivedState subject that reflects all received State updates, and a needsUpdate subject that reflects whether the latest receivedState matches the latest renderedState. I am having trouble actually executing these ideas, though. Here is where I am stuck:
Here is what I've done for other callbacks:
lazy val channel_componentWillUpdate =
  channel_create[ComponentWillUpdate[Props, State, ResponsiveLayoutContainerBackend, TopNode]]

def componentWillUpdate(cwupd: ComponentWillUpdate[Props, State, ResponsiveLayoutContainerBackend, TopNode]) =
  Callback {
    channel_componentWillUpdate.onNext(cwupd)
  }
So when the componentWillUpdate callback is triggered, the handler fires onNext on the channel (subject).
The shouldComponentUpdate is different though. It returns a value, so it needs to be structured differently. I am having trouble thinking of the right adjustment.
To summarize a bit:
react has callbacks at different stages of the view lifecycle, like componentDidMount, componentDidUpdate, etc.
I am handling all but one stage the same way - the shape of the callback is State -> Callback<Void>, so all I have to do is use a Subject for each type of lifecycle event and submit its onNext when the callback is triggered.
But one type of event has shape either State -> Boolean or State -> Callback<Boolean>.
I feel like I should be able to model this with a subject representing the delta between the last state rendered/received.
However, I don't know how this fits into the reactive style.
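Here is roughly how far I can sketch the subject idea with monix (assuming a State type with value equality and some initialState; everything else is illustrative):

import monix.execution.Scheduler.Implicits.global
import monix.reactive.Observable
import monix.reactive.subjects.BehaviorSubject

val renderedState = BehaviorSubject[State](initialState) // onNext on every render
val receivedState = BehaviorSubject[State](initialState) // onNext on every received update

// Emits true whenever the latest received state differs from the latest rendered one.
val needsUpdate: Observable[Boolean] =
  Observable.combineLatest2(receivedState, renderedState)
    .map { case (received, rendered) => received != rendered }

// shouldComponentUpdate must answer synchronously, so cache the latest value.
// The cache is updated asynchronously, which is exactly where I get stuck.
@volatile var pendingUpdate = false
needsUpdate.foreach(v => pendingUpdate = v)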
Let's assume ONE team uses the API to return data to a SPA in the browser.
Is it RESTful to return data which is specially prepared for the UI of the SPA?
Instead, the client could prepare the data specially with JS, which many want to avoid.
WHAT does "reshaped" mean? =>
public async Task<IEnumerable<SchoolclassCodeDTO>> GetSchoolclassCodesAsync(int schoolyearId)
{
    var schoolclassCodes = await schoolclassCodeRepository.GetSchoolclassCodesAsync(schoolyearId);
    var allPupils = schoolclassCodes.SelectMany(s => s.Pupils).Distinct<Pupil>(new PupilDistinctComparer());

    // Deferred: each enumeration yields fresh pupil DTOs, so every schoolclass gets its own copies.
    var allPupilsDTOs = allPupils.Select(p => p.ToPupilDTO());

    // ToList is required here: without it the Select would run again when the caller
    // enumerates the result, recreating the DTOs and losing the pupils added below.
    var schoolclassCodeDTOs = schoolclassCodes.Select(s => s.ToSchoolclassDTO()).ToList();

    // Prepare data specially for UI DataGrid with checkboxes
    foreach (var s in schoolclassCodeDTOs)
    {
        foreach (var p in allPupilsDTOs)
        {
            var targetPupil = s.Pupils.SingleOrDefault(pupil => pupil.Id == p.Id);
            if (targetPupil == null)
            {
                p.IsSelected = false;
                s.Pupils.Add(p);
            }
            else
            {
                targetPupil.IsSelected = true;
            }
        }
    }
    return schoolclassCodeDTOs;
}
This is a good question. Problem is, it's likely that only you have the answer.
tl;dr: Good old "it's application-specific".
You probably need to look at this as a continuum, not a binary decision. On one extreme you have the server generate HTML views, and it is thus responsible for a lot of UI concerns; on the other, you have the server expose a data model with very generic CRUD functionality, and it is thus not responsible for any UI concerns. Somewhere in the middle, you can also find a design where a server interface is specific to an application but does not necessarily expose HTML (although with the magic of content negotiation everything falls into place), for possibly very obvious reasons (when it comes to so-called SPAs).
So, what should you choose?
I expect some people to advise you to decouple client and server as much as possible, but I personally believe there is no such ultimate "good practice". In fact, it might be premature. Design your logical components first, and then decide on where and how they should run.
Remember:
A good architecture allows major decisions to be deferred and maximizes the number of decisions not made.
(Uncle Bob)
Why? Well, because these (domain logic and execution environment) are truly separate concerns: they might evolve independently. For example, you might decide to create a thinner client for mobile and a thicker client for desktop if the application is compute-intensive (e.g. to save battery power). Or, you might do the exact opposite if the application is network-intensive (e.g. to save roundtrips when connectivity is bad, also consider "offline-first"). Maybe you'll provide all variants and let the user choose, or maybe choose automatically based on available resources -- whatever the requirements warrant and however they might change.
I think these are more appropriate questions and architectural decisions to make (but again: only after you've already designed the boundaries of your logical components). These clearer requirements will help you decide which components of your application should run where. They will drive the way you represent your boundaries (whether they be internal or remote APIs, private or public) but not how you shape them (that's already done). Your RESTful API (if you decide you need one and that a REST-style architecture is appropriate) is just a representation for an arbitrary boundary.
And that's how you will eventually answer your own question in the context of your scenario -- which should hopefully have become very intuitive by then.
End note: While having the domain logic strictly shape your boundaries is nice and pure, it's inevitable that some concerns pertaining to the execution environment (like who controls certain network hosts, where the data should reside, etc.) will feed back into the domain design. I don't see it as a contradiction; your application does influence whatever activity you're modelling, so its own concerns must be modelled too. Tools also influence the way you think, so if HTTP is a tool and you're very good at using it, you might start using it everywhere. This is not necessarily bad (e.g. the jury is still out on "micro-services"), though one should be aware that knowing too few tools often (not always) pushes developers into awkward corners. How could I not finish with: "use the right tool for th--" ah, it's getting old, isn't it ;).
I asked a question like this before and people didn't understand. I'll try to be more concise and will use a comparison with Ajax as used in web applications.
I have the main form. There I have one button that will extract data from a field and send it to a second form, a window that will pop up and display the data in a more organized way (a TListBox).
I would like to know if there is a way, on SecondForm.Show, to send this data as a parameter, like SecondForm.Show(data).
The comparison to Ajax is that when you make an Ajax call from an HTML page to the server, you send encapsulated data; the server receives it and works with it.
I want my main form to send the data, the second form to receive the data and use it.
Is it possible? How?
If I were you, I'd add parameters to the form's constructor so that it has all the information it needs from the moment it exists.
constructor TSecondForm.Create(AOwner: TComponent; AParam1: Integer; const AParam2: string);
begin
  inherited Create(AOwner);
  // Use or store parameters here
end;
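Calling it from the main form might then look like this (the argument values are placeholders):

var
  SecondForm: TSecondForm;
begin
  SecondForm := TSecondForm.Create(Self, 42, 'hello');
  SecondForm.Show;
end;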
You're welcome to write some other method instead, though, and you can call it Show, if you want. Define it to accept whatever parameters you need, and when you're ready, you can call the inherited zero-argument Show method to make the form appear.
procedure TSecondForm.Show(AParam1: Integer; const AParam2: string);
begin
  // Use parameters here.
  inherited Show;
end;
Decide in what form to send the data from the main form to TSecondForm. For instance, in a TStringList: fill the string list on the main form and use it as a parameter in SecondForm.Show.
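A minimal sketch of that variant (ListBox1 and the edit field are assumptions; TSecondForm copies the list, so the caller can free it):

procedure TSecondForm.Show(AData: TStrings);
begin
  ListBox1.Items.Assign(AData); // copy the data into the list box
  inherited Show;
end;

procedure TMainForm.Button1Click(Sender: TObject);
var
  Data: TStringList;
begin
  Data := TStringList.Create;
  try
    Data.Text := EditField.Text; // hypothetical field holding the raw data
    SecondForm.Show(Data);
  finally
    Data.Free; // safe, because TSecondForm copied the contents
  end;
end;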
This is the answer from your other question, which you deleted. It still applies, I believe.
Always prefer passing state in the constructor if that is viable.
Problems with state occur when it changes. Multi-threading is confounded by mutating state. Code that makes copies of state becomes out of sync if the state changes later.
Objects cannot be used while the constructor is executing. That's the time to mutate state since no other party can see the effects of that mutation. Once the constructor returns the newly minted object to its new owner, it is fully initialized.
It's hard with an OOP style to limit all mutation to constructors. Few, if any, libraries admit that style of coding. However, the more mutation you push into the constructor, the lower the risk of being caught out later.
In reality, for your scenario, I suspect that these risks are minimal. However, as a general principle, initialization during construction is sound and it makes sense to adhere to the principle uniformly.
I'm evaluating backbone.js as a potential JavaScript library for use in an application which will have a few different backends: WebSocket, REST, and a 3rd-party library producing JSON. I've read some opinions that backbone.js works beautifully with RESTful backends so long as the API is 'by the book' and follows the appropriate HTTP verbiage. Can someone elaborate on what this means?
Also, how much trouble is it to get backbone.js to connect to WebSockets? Lastly, are there any issues with integrating a backbone.js model with a function which returns JSON - in other words, does the data model always need to be served via REST?
Backbone's power is that it has an incredibly flexible and modular structure. It means that you can use, extend, take out, or modify any part of Backbone. This includes the AJAX functionality.
Backbone doesn't "care" where you get the data for your collections or models. It will help you out by providing an out-of-the-box RESTful "ajax" solution, but it won't be mad if you want to use something else!
This allows you to find (or write) any plugin you want to handle the server interaction. Just look on backplug.io, Google, and Github.
Specifically for Sockets there is backbone.iobind.
Can't find a plugin? No worries. I can tell you exactly how to write one (it's 100x easier than it sounds).
The first thing that you need to understand is that overwriting behavior is SUPER easy. There are 2 main ways:
Globally:
Backbone.Collection.prototype.sync = function() {
  //screw you Backbone!!! You're completely useless I am doing my own thing
};
Per instance:
var MySpecialCollection = Backbone.Collection.extend({
  sync: function() {
    //I like what you're doing with the ajax thing... Clever clever ;)
    // But for a few collections I wanna do it my way. That cool?
  }
});
And the only other thing you need to know is what happens when you call "fetch" on a collection. This is the "by the book"/"out of the box" behavior:
collection#fetch is triggered by the user (YOU). fetch will delegate the ACTUAL fetching (ajax, sockets, local storage, or even a function that instantly returns JSON) to some other function (collection#sync). Whatever function is in collection.sync has to take 3 arguments:
action: create (for creating), read (for fetching), update (for updating), or delete (for deleting) = CRUD.
context (the this variable) - if you don't know what this does, don't worry about it; it's not important for now
options - where da magic is. We only care about 1 option though:
success: a callback that gets called when the data is "ready". THIS is the callback that collection#fetch is interested in, because that's when it takes over and does its thing. The only requirement is that sync passes it the following 1st argument:
response: the actual data it got back
Now, collection#sync has to call that success callback when it's done getting the data. Whenever collection#sync is done doing its thing, collection#fetch takes back over (through the success callback passed in) and does the following nifty steps:
Calls set or reset (for these purposes they're roughly the same).
When set finishes, it triggers a sync event on the collection broadcasting to the world "yo I'm ready!!"
So what happens in set? Well, a bunch of stuff (deduping, parsing, sorting, removing, creating models, propagating changes and general maintenance). Don't worry about it. It works ;) What you need to worry about is how you can hook into different parts of this process. The only two you should worry about (if your server wraps data in weird ways) are:
collection#parse for parsing a collection. Should accept raw JSON (or whatever format) that comes from the server/ajax/websocket/function/worker/whoknowswhat and turn it into an ARRAY of objects. It takes resp (the JSON) as its 1st argument and should return the transformed response. Easy peasy.
model#parse. Same as collection, but it takes in the raw objects (i.e. imagine you iterate over the output of collection#parse) and spits out an "unwrapped" object (see the sketch below).
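For instance, if your server wrapped everything in a hypothetical {data: [...]} envelope, the hooks might look like this (a sketch, not the vanilla behavior):

var WrappedCollection = Backbone.Collection.extend({
  parse: function(resp) {
    return resp.data; // unwrap: hand back the ARRAY of raw objects
  }
});

var WrappedModel = Backbone.Model.extend({
  parse: function(obj) {
    return obj.item || obj; // unwrap a single raw object
  }
});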
Get off your computer and go to the beach because you finished your work in 1/100th the time you thought it would take.
That's all you need to know in order to implement whatever server system you want in place of the vanilla "ajax requests".
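To make that concrete, a bare-bones socket-backed sync might look like this (a rough sketch: the URL, the frame format, and the server behavior are all assumptions):

var socket = new WebSocket('wss://example.com/api');

var SocketCollection = Backbone.Collection.extend({
  sync: function(method, collection, options) {
    // method is 'create', 'read', 'update' or 'delete' (CRUD)
    socket.send(JSON.stringify({ action: method }));
    socket.onmessage = function(event) {
      var resp = JSON.parse(event.data);
      if (options.success) options.success(resp); // hand the data back to fetch
    };
  }
});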
When implementing algorithms and other things while trying to maintain reusability and separation patterns, I regularly get stuck in situations like this:
I communicate back and forth with a delegate while traversing a big graph of objects. My concern is how much all this messaging hurts, or how much I must care about Objective-C messaging overhead.
The alternative is to not separate anything and always put the individual code right into the algorithm, like for example this graph traverser. But this would be nasty to maintain later and is not reusable.
So: Just to get an idea of how bad it really is: How many Objective-C messages can be sent in one second, on an iPhone 4?
Sure I could write a test but I don't want to get it biased by making every message increment a variable.
There's not really a constant number to be had. What if the phone is checking email in the background, or you have a background thread doing IO work?
The approach to take with things like this is: just do the simple thing first. Call delegates as you would, and see if performance is OK.
If it's not, then figure out how to improve things. If messaging is the overhead, you could replace it with a plain C function call.
Taking the question implicitly to be "at what point do you sacrifice good design patterns for speed?", I'll add that you can eliminate many of the Objective-C costs while keeping most of the benefits of good design.
Objective-C's dynamic dispatch consults a table of some sort to map Objective-C selectors to the C-level implementations of those methods. It then performs the C function call, or drops back onto one of the backup mechanisms (eg, forwarding targets) and/or ends up throwing an exception if no such call exists. In situations where you've effectively got:
int c = 1000000;
while(c--)
{
    [delegate something]; // one dynamic dispatch per loop iteration
}
(which is ridiculously artificial, but you get the point), you can instead perform:
int c = 1000000;
IMP methodToCall = [delegate methodForSelector:@selector(something)];
while(c--)
{
    methodToCall(delegate, @selector(something));
    // one C function call per loop iteration, and
    // delegate probably doesn't know the difference
}
What you've done there is taken the dynamic part of the dispatch — the C function lookup — outside the inner loop. So you've lost many dynamic benefits. 'delegate' can't method swizzle during the loop, with the side effect that you've potentially broken key-value observing, and all of the backup mechanisms won't work. But what you've managed to do is pull the dynamic stuff out of the loop.
Since it's ugly and defeats many of the Objective-C mechanisms, I'd consider this bad practice in the general case. The main place I'd recommend it is when you have a tightly constrained class or set of classes hidden somewhere behind the facade pattern (so, you know in advance exactly who will communicate with whom and under what circumstances) and you're able to prove definitively that dynamic dispatch is costing you significantly.
For full details of the inner workings at the C level, see the Objective-C Runtime Reference. You can then cross-check that against the NSObject class reference to see where convenience methods are provided for getting some bits of information (such as the IMP I use in the example).