I'm writing a keyboard extension, where resources are more limited, and I'm trying to pick a code structure. The project will have multiple classes or structs encapsulating a fair amount of data, with functions to manipulate them. Each class will have one or two variables defining its state, which the UI will need to observe and update accordingly.
Two design choices
Make each class an EnvironmentObject and publish the variables the UI needs to observe. This would mean setting ~10 classes as EnvironmentObjects.
Consolidate the state variables into one class which is set as an EnvironmentObject. It is less clean, but I would only need to set 1 class as an EnvironmentObject.
My question is: Is there any penalty in picking 1?
Everybody's application will be slightly different in terms of setup.
However, the ObservableObject protocol means that each object – whether declared as an environment object or a state object – has a single objectWillChange publisher which fires whenever any observed property changes. The @Published property wrapper handles that for us automatically.
But that message only tells the SwiftUI rendering system that something in the object has changed; the rendering system then has to work out whether any of the changed object's new properties require the UI to be redrawn.
If you have a single environment object with a huge number of published properties, that might result in a performance hit. If you split that single object into multiple environment objects, and each view only opts into the environment objects it needs to watch in order to redraw itself, you might see some performance improvement, because some views' dependencies would not have changed, and SwiftUI would be able to, in essence, skip over them.
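As a rough illustration of that coarse-grained signal, here is a minimal sketch in TypeScript (not the SwiftUI API – all names are invented) of a store whose change notification carries no information about *which* property changed, just as objectWillChange fires for any @Published mutation:

```typescript
type Listener = () => void;

// Base store: a single change signal shared by every property,
// analogous to ObservableObject's objectWillChange publisher.
class Store {
  private listeners: Listener[] = [];
  subscribe(fn: Listener) { this.listeners.push(fn); }
  protected objectWillChange() { this.listeners.forEach(fn => fn()); }
}

// Hypothetical consolidated app state with two unrelated properties.
class AppState extends Store {
  private _keyboardHeight = 0;
  private _theme = "light";
  get keyboardHeight() { return this._keyboardHeight; }
  set keyboardHeight(v: number) { this.objectWillChange(); this._keyboardHeight = v; }
  get theme() { return this._theme; }
  set theme(v: string) { this.objectWillChange(); this._theme = v; }
}

// A subscriber that only cares about the theme is still poked when
// the keyboard height changes; the framework must then work out
// whether a redraw is actually needed.
const state = new AppState();
let pokes = 0;
state.subscribe(() => { pokes += 1; });

state.keyboardHeight = 300; // unrelated to theme, but still notifies
state.theme = "dark";
console.log(pokes); // 2 notifications for 2 unrelated changes
```

Splitting AppState into two smaller stores would let the theme subscriber go untouched by keyboard-height changes, which is the performance argument for multiple environment objects.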
But having multiple objects can have downsides too. You would need to ensure that each environment object was truly independent of all the others; otherwise you'd get cascading updates, which would negate any performance improvements and possibly make things worse. On the other hand, if you get the separation of functionality right, it can help keep your code organised and maintainable. Finding that sweet spot can be really tricky, though.
So there's a balance to be struck. If performance is turning out to be an issue, Instruments ought to be able to help in terms of deciding where your individual pain points are and how to start addressing them.
But generally, I'd start from the code structure that makes it easier to write and debug for you. It's easier to optimise well-structured code than it is to maintain highly optimised but hard-to-read code!
(Disclaimer: I'm not a deep expert in SwiftUI and Combine internals – the above comes from my practical application experience, as well as from facing similar questions with ReactJS and Redux in the JavaScript world, where single large state objects face similar scaling issues.)
Related
Is there any disadvantage, in terms of performance or anything else, to marking a variable as private on a class or struct in Swift, or on a view in SwiftUI?
I know about the encapsulation advantage you get when the variable is private, preventing access from other code outside the scope. My concern is about performance.
I can say (from Flutter experience) that it won't slow down your code or app. It's just usable from this one file... but you mentioned that earlier in your question.
But no this won't make your app slower.
Swift is usually a compiled language (interpreters may exist, but they are definitely not common), and once the source code is compiled (which the end user never has to sit through anyway), there is no performance increase or decrease caused by access control. In fact, after the code has been compiled, there is no such thing as access control: the program is now machine code, and it will simply run the way it was written to run. Access control is there for the programmer, not the machine. As for compilation performance, I've yet to come across any literature suggesting a compiler performs better or worse because of access control.
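TypeScript happens to make the same point very concretely: its `private` modifier is enforced only at compile time and is erased from the emitted JavaScript, so it cannot affect runtime performance at all. A small sketch (names invented):

```typescript
// `private` here is checked by the compiler only; the generated
// JavaScript is identical to what a public field would produce.
class Wallet {
  private balance = 100;
  spend(amount: number) { this.balance -= amount; }
  report() { return this.balance; }
}

const w = new Wallet();
w.spend(30);
console.log(w.report()); // 70

// After compilation the access modifier is gone entirely: there is
// no runtime barrier, which shows access control exists for the
// programmer, not the machine.
console.log((w as any).balance); // 70
```

Swift's compiler does use access control for things like optimisation decisions across module boundaries, but as a rule of thumb, marking things private will not make your app slower.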
I am writing a relatively simple app in which I need to do some state management. After looking through all the documentation and many rewrites, I need help understanding how to actually structure and use the tools provided for state management, like the classic provider package and Riverpod's providers.
My questions concern the arrangement of Models which hold state, nesting those objects and redrawing only parts of the widget tree.
My project is a mobile app which lets you keep a shared log of your fuel stops and the money you paid. If you share a vehicle with one or more persons, you all enter a pool in which the fuel entries are displayed. There are models for users, log entries and pools. When a user is logged in, a number of available pools need to be fetched and stored and after selection the members and logs of the selected pool need to be fetched. If updates are made to e.g. the name of the user or a specific log entry, the current view might need to get updated.
Now, the current code is a mess where much of the state gets stored in a pool model, which is provided by a ChangeNotifierProvider at the root of the widget tree. Since I was under the impression that one should try to update as little of the UI as possible, I tried to split my state up into different models, which are nested into each other; for example, the LogEntries are part of the pool model but are themselves ChangeNotifiers. The idea was to be able to selectively refresh and listen to parts of the state. This has led to horrible code where I sometimes need to call notifyListeners() from outside my pool model / state object. I want to remedy the situation with a rewrite of my state management.
Now to my question: How would one structure state generally (or my state specifically) for it to be efficient and pleasing to the magical gods that created the libraries? This stackoverflow question from two/one year ago asks a similar question and the provided answers recommend to do one of these things:
Leave the state nested and inject the Models into each other with ChangeNotifierProvider, but apparently that's not great if the provided objects are models
Put everything in one state object whose provider sits at the top of the tree, and maybe use a selector to only refresh the part of the UI which is affected
Nest models but only provide the root as an immutable object, and update state by calling functions and copying stuff
I think another approach would be to use the recently released riverpod package and create a lot of providers for every need.
Now I have no idea which of these approaches is better or valid or if they all work perfectly fine. My questions regarding the corresponding approach would be:
In what order would I nest them? I intuitively nested the Pools in the root state object, the log entries in the pool model but dependency wise I'd probably have to go User->AppState->selectedPool->Logs, resulting in a possible statement of logEntry.selectedPool.appState.user. Feels weird.
This might work, but I'd always get the whole state in one model (which is arguably not that big in my use case). One could use a Selector to only refresh parts of the UI, but I think there was a problem with using mutable objects for that, because Selector needs to be able to tell whether something has changed. Also, as far as I understood, I can only use the value I've specifically selected, and not listen to one property while using a different one for the UI refresh.
Same as the above, also a lot of boilerplate code.
This one seems the most exciting, since riverpod and its providers look really cool. But would I nest my state, or just provide everything globally and maybe inject a few things with ref.watch() in the creation method? Would I create a new provider for every change I want to listen to separately, or is it cheaper to figure that out once I get the object? And since StateNotifier, the riverpod equivalent of ChangeNotifier, holds only a single value (I think?), would I create a new provider for every important piece of information I need?
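To make the Selector trade-off concrete, here is a framework-neutral sketch in TypeScript (not the Flutter API – all names are invented) of a watcher that only triggers a refresh when the selected value actually changes, and of how in-place mutation defeats that equality check:

```typescript
type Selector<S, T> = (state: S) => T;

// A watcher re-runs its callback only when the selected slice of
// state differs from last time, like provider's Selector widget.
function makeWatcher<S, T>(select: Selector<S, T>, onChange: (v: T) => void) {
  let prev: T | undefined;
  return (state: S) => {
    const next = select(state);
    if (next !== prev) { // shallow equality, as Selector uses by default
      prev = next;
      onChange(next);
    }
  };
}

interface Pool { name: string; logs: string[]; }
const pool: Pool = { name: "Family car", logs: [] };

let nameRenders = 0;
let logRenders = 0;
const watchName = makeWatcher((p: Pool) => p.name, () => { nameRenders += 1; });
const watchLogs = makeWatcher((p: Pool) => p.logs, () => { logRenders += 1; });

watchName(pool); // first run: refresh
watchName(pool); // unchanged: skipped
watchLogs(pool); // first run: refresh

// The pitfall: mutating the array in place keeps the reference
// identical, so the equality check passes and no refresh happens.
pool.logs.push("40.5 L / 62 EUR");
watchLogs(pool); // skipped, even though the data changed!

// Replacing the value (immutable update) makes the change visible.
pool.logs = [...pool.logs, "38 L / 58 EUR"];
watchLogs(pool); // refresh

console.log(nameRenders, logRenders); // 1 2
```

This is why the "one big model plus Selector" approach pushes you toward immutable updates: selective refresh only works if changes are observable through equality.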
As you might have noticed, I tried to look up a lot of stuff but haven't quite figured out how to translate all the techniques to an actual project beyond code demonstration size. I would be immensely thankful if someone could explain to me the correct approach to structuring state management in general, which approach might be the best one for my specific situation and most importantly what might be the reason to decide against other ones. Please don't hesitate to point out any mistakes I made, although stackoverflow has a reputation suggesting this might not be an issue. If anybody wants to have a look at my code, there's a branch where I started to work towards a better modularization starting with an AppState Model and reworking some functions.
My understanding of MVVM is that the view is responsible for user presentation logic, the view-model for interaction logic and specific data transformations from UI-independent model classes, and the model itself represents the business domain from a data point of view.
The key point is that model classes should be UI independent.
Here comes the UWP dev model and Template 10's BindableBase, from which model classes are supposed to derive. It's quite handy, but it ties the model to a specific UI implementation, namely UWP + Template 10.
I've got a data access layer spitting out domain objects that I want to feed directly as models to the UI. The domain is quite complicated. What I don't want to do is reimplement the domain objects in the UI, nor do I want to pollute them with UI-specific functions.
Any thoughts on this?
thank you
With all due respect, I think you might be thinking about this wrong.
The goal of MVVM is to separate your logic exactly like you describe. The intent of this separation is to simplify your code. Does this make view-models testable and separate concerns? Yes. But, I argue those are secondary goals to simplicity – which keeps your code manageable and maintainable.
Here’s another way to say it.
MVVM is a terrific approach for XAML applications. But, if MVVM did not make your view-models testable or separate concerns, MVVM is still a terrific approach for XAML applications because it simplifies code so much – smaller, simpler, and isolated.
Worth every penny.
You might argue that MVVM creates another layer of code that developers must understand before they can reason about and contribute to the base. I would agree. But I would NOT conclude that MVVM is therefore impractical. That added layer is trivial compared to the coupling of logic you would otherwise have.
Now to your question.
Your data layer object, probably a data transfer object, is remarkably like your UI object, probably a model. Your developer instincts compel you to unite similar things through polymorphism, code generation, or interfaces to avoid added opportunity for bugs, complexity, and tests.
Consider this.
Objects created in your data layer are created for your data layer. Likewise, objects created in your UI layer are created for your UI layer. Similarity in structure is a byproduct of similarity in domain. Of course, they are similar: their intents, however, are not. Why would you merge them?
You already see the problem.
The only differences you have are, perhaps, a data layer constructor taking in some type of data reader and populating properties. That is perfect for the data layer but inappropriate for the UI layer. Your UI layer might have messaging events or a custom method to handle interaction. That is perfect for the UI layer but inappropriate for the data layer.
So, where’s the similarity?
I think developers, including myself, tend to see similar things as potentially identical. Your data-relevant features do not belong in your UI, nor the other way around. Instead, what you need to do is see the similarity but recognize that they are drastically different objects that don't deserve to be made the same.
We’re lazy.
Duplicate code isn't really about tests and bugs. It's about how we, as developers, are so obsessed with "work smarter, not harder" that we tend to look down on "working hard". If an object is built for one layer, it should live in that layer and not be shared by another. I strongly feel this way.
The right solution
The easiest solution is to let your DTO serialize on the data layer from DataLayer.Object and then deserialize it on the UI layer as UILayer.Model. This is the easiest and simplest approach, and it allows you to shape the data and the object API for use by each unique layer.
This means there are two nearly identical objects: one on the data layer and one on the UI layer. But it does NOT mean there are two identical objects. They are not identical because they have functionality unique to each of the tiers, each of the layers.
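As a hedged sketch of that serialize/deserialize boundary, in TypeScript rather than C# and with invented names, the two nearly identical objects might look like this:

```typescript
// Data-layer object: knows how to hydrate itself from a record,
// a concern that has no place in the UI layer.
namespace DataLayer {
  export class User {
    constructor(public id: number, public name: string) {}
    static fromRecord(row: { id: number; name: string }) {
      return new User(row.id, row.name);
    }
  }
}

// UI-layer model: structurally similar, but carries a UI-facing
// helper the data layer will never need.
namespace UILayer {
  export class UserModel {
    id = 0;
    displayName = "";
    greet() { return `Hello, ${this.displayName}!`; }
    static fromJson(json: string): UserModel {
      const raw = JSON.parse(json);
      const m = new UserModel();
      m.id = raw.id;
      m.displayName = raw.name;
      return m;
    }
  }
}

const dto = DataLayer.User.fromRecord({ id: 7, name: "Ada" });
const wire = JSON.stringify(dto);               // serialize at the boundary
const model = UILayer.UserModel.fromJson(wire); // deserialize into the UI's own type
console.log(model.greet()); // "Hello, Ada!"
```

The structures are similar by domain, but each side is free to evolve its own API without touching the other, which is the whole point of keeping them separate.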
What if there is no functionality?
It makes sense to wonder if this applies to objects and models that have no added functionality. I believe strongly that objects and models in any layer without added functionality are simply waiting for that functionality to be added. Should you assume there is none and build to that assumption, you force the future you or forthcoming maintenance developers to never add layer-specific functionality.
Does that matter?
I think it does. Why? Because layer-specific functionality in an object or model allows me to add sophistication (not complexity) to my architecture and implementation within the context of the data object or model, and not in an external construct like a manager, helper, or utility. There is no question that Type.DoSomething is easier than Helper.DoSomethingForType(Type).
I actually have three
Just so you know, here's how I do it: in my projects, you and I probably have similar data layers/services. But my UI layer actually has TWO models, not one. (Let's pretend User is the data type.) My UI layer has json.User and Models.User. The json.User is a bare-bones structure, deserialized from my service, and it matches the service object exactly. But my UI rarely needs the service's API surface/structure.
Service(DataLayer.User) > | net | > UI(json.User > Models.User)
So then json.User is used to create Models.User, whose structure meets my UI layer's needs exactly, including methods, events, and messaging constructs. I am free to change my Models.User as I add features to my UI, too, including merging data from other/new data services. Also, Models.User can implement INotifyPropertyChanged, where the data-layer objects and the json objects never would (or need to).
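A minimal sketch of that pipeline, in TypeScript with invented names (the notification method here stands in for INotifyPropertyChanged):

```typescript
// json.User stage: a wire type mirroring the service exactly.
interface JsonUser { id: number; name: string; }

// Models.User stage: the UI's own shape, with change notification
// that neither the service nor the wire type ever needs.
class ModelUser {
  private listeners: ((prop: string) => void)[] = [];
  private _name: string;

  constructor(raw: JsonUser) { this._name = raw.name; }

  onPropertyChanged(fn: (prop: string) => void) { this.listeners.push(fn); }

  get name() { return this._name; }
  set name(v: string) {
    this._name = v;
    this.listeners.forEach(fn => fn("name")); // INotifyPropertyChanged analog
  }
}

const wire: JsonUser = JSON.parse('{"id":1,"name":"Ada"}'); // json.User
const user = new ModelUser(wire);                           // Models.User

const changed: string[] = [];
user.onPropertyChanged(p => changed.push(p));
user.name = "Grace";
console.log(changed); // ["name"]
```

Only the constructor couples ModelUser to the wire shape, so a service change touches one spot in the UI layer.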
Consider this
If you keep your models separate and you keep one codebase from improperly influencing another, then any change in your database or data service/layer does not REQUIRE a change in your UI layer, even if the API surface changes. Only json.User and perhaps some deserialization hints in your UI layer are impacted. To me, this just makes sense.
But what about testing and bugs?
Tests rarely test structures. Data structures are the simplest thing you can add to a solution and rarely contribute to complexity. You can reason over a structure in about a second. You can reason over a method in about a minute. Structures have very little cost. They also have very little construction cost. If you copy/paste the initial structure from your data layer to your UI layer – that’s what we all do. But that is NOT duplicating code. It is just building similar objects appropriately decoupled.
That’s what I think.
While working on projects, I often face the dilemma of working on the UI first or the logic first. Having the UI first gives a nice overview of how the end product is going to look, while having the logic first uncovers possible roadblocks in the technology.
However, it is not always that crystal clear... sometimes the UI may need data to be populated to really show what it means, and simulating that data can be more difficult than implementing the logic. What is your preferred approach for development, and why? Which is more efficient and effective?
(I am seeing this issue more and more with iPhone projects.)
Neither.
You will have to do some thinking at the beginning of the project anyway, deciding the general approach you will take and what operations you will support.
Do that reasonably well and you will have defined the interface between the view and the underlying logic. Look at the Model-View-Controller approach for some inspiration.
What you want to have early on is an idea of the basic operations your logic code needs to perform in order to achieve a purpose. Usually it will be a simple function call, but sometimes it may involve more than that. Have that clear first.
Next, a complex system that works is based on a simple system that works.
Which means you will need to have a basic UI you'll use to test a basic logic implementation first.
A simple form with a button which presents a message is basic enough. Then, it can grow, you implement a piece of functionality and then you add a simple UI you can test it with.
It is easier to do both piece by piece as the logic and UI for a small piece of the logic are conceptually similar and it will be easy to keep track of both while you implement and test.
The most important part is to keep the UI and logic decoupled, making them talk through a common interface. This will allow you to make quick edits when needed and improve the looks of the GUI towards the end.
You will be better able to just scrap the UI if you don't like it. All you'll need to do is use the same interface, one which you know how to do because you wrote it and you already implemented it.
If at some point you realize you've made a big mistake you can still salvage parts of the code, again because the UI and logic are decoupled and, hopefully, the logic part is also modular enough.
In short: think first, do both the UI and the logic in small increments and keep things modular.
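The "talk through a common interface" advice above might look like this minimal TypeScript sketch (all names illustrative): the logic and the UI each sit behind an interface, so either side can be scrapped and rewritten without touching the other.

```typescript
// The logic side of the boundary.
interface Logic {
  add(a: number, b: number): number;
}
class Calculator implements Logic {
  add(a: number, b: number) { return a + b; }
}

// The UI side of the boundary: two interchangeable implementations.
interface View {
  show(msg: string): void;
}
class PlainView implements View {
  shown: string[] = [];
  show(m: string) { this.shown.push(m); }
}
class FancyView implements View {
  shown: string[] = [];
  show(m: string) { this.shown.push(`*** ${m} ***`); }
}

// Glue code depends only on the interfaces, never on the classes.
function run(logic: Logic, view: View) {
  view.show(`2 + 3 = ${logic.add(2, 3)}`);
}

const plain = new PlainView();
run(new Calculator(), plain);
console.log(plain.shown[0]); // "2 + 3 = 5"

// Swapping the UI requires no change to Calculator or run().
const fancy = new FancyView();
run(new Calculator(), fancy);
console.log(fancy.shown[0]); // "*** 2 + 3 = 5 ***"
```

The interface is the part worth getting right early; both implementations can then grow in small increments, as described above.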
Iterate between both (logic and UI). Back and forth. The UI may change as you understand how and what you can do with the logic and whatever constraints that may present. The logic may change as you change the features, behavior, or performance requirements of an easy-to-use, responsive, and decent looking UI.
I usually do the minimum possible of each until I have some barely working mockups. Then I use each mockup to test where my assumptions about the proper UI and/or logic might be right or wrong. Pick the most flawed, and start iterating.
Apple suggests mocking up the UI on paper first. (Have an eraser handy...)
If possible, in parallel.
But personally, I advocate logic first.
I start with the fundamentals first and that means getting the logic coded and working first. There are two reasons for this:
If I can't get the logic working correctly, having a pretty UI is useless and a waste of my time
You will most likely change the UI as you work on the logic, making the UI process longer and more expensive
I usually get my UI in order first. The reason? As I prototype different designs, sometimes my idea for the app changes. If it does, then it's of no consequence: there is no code to rewrite.
However, sometimes it is helpful to get the fundamentals first in order to determine if the app is going to work or not. If it's not going to function, then why waste time making interfaces?
I like to start by laying out the different parts of my project in something like Visio.
I make boxes for the different views I expect to have, and I fill them with the information I expect them to contain.
I make another set of boxes for my expected model objects (logic). I fill them with the information I expect they will work with, and I draw lines between models and views where I think it will be necessary.
I do the same thing for object graphs (if I plan on using CoreData), and for database tables if I am going to have an external database.
Laying everything out visually helps me decide if I am missing any important features or interactions between project components. It also gives me something to quickly refer to if I later lose track of what I was doing.
From that point, I tend to work on a model until it has enough done to fill out part of a view, then I work on a view until it can interact with the model.
I also try to identify views or models that could be reused for multiple purposes so that I can reduce the overall amount of work I have to do.
Take the agile approach and work on small amounts of both in iterations. Slowly building each functional piece of the program so as to not build any monolithic piece at once.
I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You shouldn't clutter your code with unnecessary interfaces, but they are useful at the boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
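A small TypeScript sketch of the mocking point, with invented names: the interface exists precisely so a test can substitute a controllable implementation for the real dependency.

```typescript
// The boundary: code depends on this, not on a concrete clock.
interface Clock {
  now(): number;
}

// Production implementation.
class SystemClock implements Clock {
  now() { return Date.now(); }
}

// The class under test takes its dependency through the interface.
class SessionTimer {
  constructor(private clock: Clock, private startedAt = clock.now()) {}
  elapsed() { return this.clock.now() - this.startedAt; }
}

// In a test, a mock clock we fully control replaces SystemClock,
// making the timer's behaviour deterministic.
class MockClock implements Clock {
  t = 1000;
  now() { return this.t; }
}

const clock = new MockClock();
const timer = new SessionTimer(clock);
clock.t = 1500;
console.log(timer.elapsed()); // 500
```

Without the interface, testing SessionTimer would mean asserting against real wall-clock time, which is exactly the kind of pain that makes extracting an interface worth it.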
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - You Aren't Gonna Need It (from Wikipedia)
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
You need to decide what the programming interface is, by specifying the public functions. If you don't do a good job of that, the class would be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, this leads me to create interfaces when I need them to supply dependencies for a test case.
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
One thing no one has mentioned yet is that sometimes an interface is necessary to avoid dependency issues: you can have the interface in a common project with few dependencies and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package, then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes, then my previous answer applies.
The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you've thought this through, you could already have created the interface and been on your way, without assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
It holds some Value
It helps Persist some entity
It performs some Service
The majority of services should have interfaces. An interface creates a boundary and hides the implementation, and you already have a second client: all of the tests that interact with that service.
Basically if you would ever want to Mock it out in a unit test it should have an interface.