I am just studying the code of Sacha Barber's MVVM framework Cinch, and I saw this in the xxxViewModel.cs file:
DataService.FetchAllOrders(CurrentCustomer.CustomerId.DataValue);
DataService is a static class. Being a junior dev, I am only used to interfaces for data services. Why is that class static?
Or do you think he made it just for the example?
So is that a good approach?
In fairness, I don't know what's going on in FetchAllOrders - it might be programmed to behave well.
In practical experience, I've seen static classes used poorly to maintain the infrastructure needed to do data access. I say "poorly" because these implementations (that I've seen) were not made thread-safe. When the code was deployed to a multi-user environment (such as a web application), it exploded.
Use static classes for classes that contain no state (and are thread-safe as a result) - classes with only methods, for example.
Use static classes where access is deliberately serialized with locks (again, thread-safe).
Use static classes in throw-away code to avoid the design overhead of constructing/maintaining/passing instances.
Look into the .NET Framework, see which classes Microsoft made static, and meditate on why.
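To make the trade-off concrete, here's a minimal sketch (the types are illustrative; this is not Cinch's actual code) contrasting a stateless static helper, which is safely static, with the interface-based style you're used to, which can be swapped for a test double:

using System.Collections.Generic;
using System.Linq;

public class Order { public int OrderId { get; set; } }

// Fine as a static class: it holds no state, so it is thread-safe.
public static class OrderCalculations
{
    public static decimal Total(IEnumerable<decimal> lineAmounts) => lineAmounts.Sum();
}

// The interface-based alternative: consumers depend on the abstraction,
// so the implementation can be replaced in tests - which a static
// DataService cannot.
public interface IDataService
{
    IList<Order> FetchAllOrders(int customerId);
}

public class SqlDataService : IDataService
{
    public IList<Order> FetchAllOrders(int customerId)
    {
        // ... real data access would go here ...
        return new List<Order>();
    }
}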
I see a lot of ViewModels derived from AndroidViewModel, which of course then requires an application reference.
class SomeViewModel(application: Application) : AndroidViewModel(application)
But why would one do this? It hurts me to see application handed over to ViewModel. What would be an acceptable use case for this?
If there is any reason to use AndroidViewModel, can one not derive from ViewModel instead and use Dagger 2 to inject the application?
Not all codebases are created equal. AndroidViewModel can be a useful tool for incremental refactoring in "legacy" codebases that don't have many abstractions or layering in place (read: Activity/Fragment god objects).
As a bridge from a "legacy" codebase, it makes sense to use it in this situation.
But why would one do this? It hurts me to see application handed over to ViewModel. What would be an acceptable use case for this?
The use case for AndroidViewModel is for accessing the Application. In a "legacy" codebase, it's relatively safe to move "Context/Application-dependent code" out of Activities and Fragments without requiring a risky refactor. Accessing Application in the view model will be necessary in that scenario.
If there is any reason to use AndroidViewModel, can one not derive from ViewModel instead and use Dagger 2 to inject the application?
If you're not injecting anything else, then at best it's a convenient way to get an Application reference without having to type cast or use any DI at all.
If you're injecting other members, be it with a DI framework or ViewModelFactory, it's a matter of preference.
If you're injecting a ViewModel directly into your Activity/Fragment, you're losing the benefits the platform is providing you with. You'll have to manually scope the lifecycle of your VM and manually clear your VM for your UI's lifecycle unless you also mess around with ViewModelStores or whatever other components are involved in retention. At that point, it's a view model by name only.
Because it requires an application reference, it can provide a Context, which can be useful (e.g. for obtaining a system service).
See - AndroidViewModel vs ViewModel
I am an experienced .NET/C# developer but new to pretty much all of the technologies/libraries here including SQL/DB work.
I am developing a project with an Azure/Entity Framework .NET backend and a portable .NET SDK for consumption in a number of other projects. I am trying to follow recommended practices and guidelines, but it's surprisingly hard to find documentation, and I find myself repeatedly feeling like I'm fighting against the system, slowly beating out a seemingly endless succession of fires with a blunt tablespoon.
I find myself wondering if the overall architecture I'm using is the fundamental problem here. I prefer to pretend I'm not merely incompetent.
Current Structure
DTO contracts project
Interfaces for the DTO classes shared between the other two projects
Backend project
Implementations of the DTO interfaces + conversion to/from model classes
Code first database model classes
TableController<SOME_DTO_CLASS> implementations
ApiController for non-query operations
Portable SDK library project
Implementations of the DTO interfaces + conversion to/from SDK classes
SDK exposed classes for use from other applications
Service class that wraps MobileServiceClient and IMobileServiceTable and exposes SDK classes
Motivation/Implementation
Contract interfaces
The motivation for the DTO contract interfaces is to get as far away from magic strings / relying on member names as possible. These are interfaces rather than classes because TableController<T> requires implementations of ITableData, which is not available for use in the portable DTO contracts project.
Backend
The TableController<SOME_DTO_CLASS> classes' GET methods currently query the current context directly (NOT this.Query()) and use .Select() to create matching instances of the DTO classes. Lazy loading is left intact. These GET methods apply a .Where() with this.User to filter out only those entities the user has permission to access.
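Roughly sketched (the entity names and the user-id lookup below are placeholders, not my real classes):

// Inside a TableController<OrderDto>; 'context' is the EF DbContext.
public IQueryable<OrderDto> GetAllOrders()
{
    string userId = GetCurrentUserId(); // hypothetical helper wrapping this.User

    return context.Orders
        .Where(o => o.OwnerId == userId)   // permission filter
        .Select(o => new OrderDto          // model -> DTO projection
        {
            Id = o.Id,
            Total = o.Total
        });
}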
The Code-First model classes all derive from EntityData, even where a class is not going to be exposed via a TableController<T>. Navigation properties reference types that are NOT exposed via their own TableController<T>. The fluent API is used to describe the relationships.
The DTO classes expose their relation properties as the interface types rather than their concrete types because that's how interfaces work.
SDK
Currently this uses IMobileServiceTable but will likely switch over to IMobileServiceSyncTable at some point.
The DTO classes expose their relation properties as the interface types rather than their concrete types because that's how interfaces work.
Current flaming spoon target
Right now I've got the SDK successfully exposing its own SDK types pulled down from the database. DB model -> DB DTO -> *MS Code* -> SDK DTO -> SDK exposed class all works.
Sort of.
The DB DTO classes' properties that expose other DB DTO classes appear to be ignored in transmission, despite being part of the IQueryable returned in the GET method. I cannot retrieve them using $expand=, as apparently "The specified type member 'TestClass' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported." This still occurs if I switch from interface property types to concrete ones.
I could potentially avoid this issue by only including foreign key IDs and fetching linked entities separately in the SDK, but that seems highly inefficient and somewhat very nope.
Get to the question you 4AM fool!
Dis gud?
More specifically (and formally) is this current project structure reasonable and likely to be sustainable? Are there any obvious flaws or oversights that will prevent this from working?
Assuming this is reasonably reasonable, what is the proper way to tackle the DTO $expand issue?
Using $expand is the way to go, but unfortunately the Azure Mobile client SDK blocks it in the query string. This will be fixed in the future, but for right now your best bet is to use an attribute on the server side that adds the query string parameter to incoming requests.
For an example of this, see https://github.com/paulbatum/FieldEngineerLite/blob/master/FieldEngineerLite.Service/Helpers/ExpandPropertyAttribute.cs. The sample is for Azure Mobile Services, but that code can be easily applied to the Azure Mobile Apps server SDK.
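As a rough sketch of what that attribute does (modeled loosely on the linked sample; adapt the details to your project), it rewrites the incoming request URI to include the $expand clause before the OData query is composed:

using System;
using System.Linq;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ExpandPropertyAttribute : ActionFilterAttribute
{
    private readonly string propertyName;

    public ExpandPropertyAttribute(string propertyName)
    {
        this.propertyName = propertyName;
    }

    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        base.OnActionExecuting(actionContext);

        var builder = new UriBuilder(actionContext.Request.RequestUri);
        var parts = builder.Query.TrimStart('?')
            .Split(new[] { '&' }, StringSplitOptions.RemoveEmptyEntries)
            .ToList();

        // Append to an existing $expand clause, or add a new one.
        int index = parts.FindIndex(p => p.StartsWith("$expand=", StringComparison.Ordinal));
        if (index >= 0)
            parts[index] += "," + propertyName;
        else
            parts.Add("$expand=" + propertyName);

        builder.Query = string.Join("&", parts);
        actionContext.Request.RequestUri = builder.Uri;
    }
}

You would then decorate the GET method on the relevant TableController, e.g. with [ExpandProperty("TestClass")], so the navigation property is expanded even though the client can't request it.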
I share a Data Transfer Object between a C# Azure Mobile Services server and client; I use the same class in both applications.
The TableController class used by Azure Mobile Services requires the DTO to inherit from 'EntityData', which in turn implements the interface 'ITableData'.
ITableData is part of the assembly:
Microsoft.WindowsAzure.Mobile.Service.Tables
I have not figured out how to include that reference without installing the entire server-side Mobile Services package from NuGet:
WindowsAzureMobileServices.Backend
That includes OWIN and many other references the client does not need, and it is what I am doing currently. It works for the desktop application I am currently working on, but I think it will not work for universal apps and Windows Phone apps.
I also looked at Microsoft's samples for mobile services, and there they use separate classes as DTOs for server and client.
Is it really the case that we have to write the same code twice?
No - you can instead make use of Shared Projects and partial classes.
Your Shared Project will hold the common properties for the entities.
The other projects will reference this Shared one and can add further properties to the shared entities, still using partial classes.
I have direct experience with AMS, so I know what you mean.
In my experience, it is in any case not realistic to expect to have exactly the same entity classes on client and server.
For instance, in so-called Portable Class Libraries you have only a very small subset of the framework and references available.
Beyond the properties themselves, you normally put attributes on POCO class files. On the client you may have some attributes that aren't available/meaningful for the server (e.g. SQLite attributes), or vice versa. You can get stuck in this situation with the shared-projects approach I suggest as well, but there it can be managed with preprocessor directives.
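A minimal sketch of that layout (the type and the compilation symbol are invented for illustration):

// In the Shared Project: the common properties, compiled into both sides.
public partial class TodoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
    public bool Complete { get; set; }
}

// In the client project: client-only members, guarded where needed.
public partial class TodoItem
{
    public System.DateTime? LocalModifiedAt { get; set; }  // client-only

#if CLIENT_SQLITE   // hypothetical symbol defined only in the client build
    // Client-only attributes (e.g. SQLite ones) would decorate members here.
#endif
}

// A server-side partial part can even supply the base class on its own,
// e.g. 'public partial class TodoItem : EntityData { }', since a base
// type only needs to be declared in one part.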
Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (IMHO) a much clearer way, provided the components were properly designed (using IoC patterns, loose coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions of life-cycle management, but I don't necessarily understand how it is any better/faster than do-it-yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language by manual casting, when you'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface? I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" using the most natural, idiomatic way - programming!
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios, you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire themselves - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
A DI Container is not required for DI - it's just damned helpful.
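To make that concrete, here's a hedged sketch of the difference (using Microsoft.Extensions.DependencyInjection purely as one example container; all the types are illustrative):

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IOrderRepository : IDisposable { }

public class SqlOrderRepository : IOrderRepository
{
    public void Dispose() { /* release the underlying connection */ }
}

public class OrderService
{
    public OrderService(IOrderRepository repository) { }
}

class Demo
{
    static void Main()
    {
        // Poor man's DI: composition is trivial, but sharing, scoping,
        // and disposal are now your code to write and maintain.
        using (var repository = new SqlOrderRepository())
        {
            var service = new OrderService(repository);
        }

        // With a container: register the mapping once; constructors are
        // auto-wired from static type information, and the container
        // tracks scopes and disposes whatever it created.
        var services = new ServiceCollection();
        services.AddScoped<IOrderRepository, SqlOrderRepository>();
        services.AddScoped<OrderService>();

        using (var provider = services.BuildServiceProvider())
        using (var scope = provider.CreateScope())
        {
            var service = scope.ServiceProvider.GetRequiredService<OrderService>();
        }
    }
}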
A more detailed treatment can be found in the article When to use a DI Container.
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
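For instance, the wiring for that scenario might look like this in StructureMap (a rough sketch; the names are invented):

using System.Collections.Generic;
using StructureMap;

public class Foo { }

public interface IFooRepository
{
    IEnumerable<Foo> GetAll();
}

// One implementation; it could equally read from an XML file or a service.
public class SqlFooRepository : IFooRepository
{
    public IEnumerable<Foo> GetAll()
    {
        return new[] { new Foo() };
    }
}

class Demo
{
    static void Main()
    {
        // Tell the container which implementation backs the interface.
        var container = new Container(x =>
        {
            x.For<IFooRepository>().Use<SqlFooRepository>();
        });

        // Consumers only ever ask for (or receive) IFooRepository.
        var repository = container.GetInstance<IFooRepository>();
    }
}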
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
For example, I once used a code generating tool at a previous job which spit out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was to configure it to spit it all out in specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. For that generated class, it was ugly. But the rest of my application didn't care because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility for creating dependent objects is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - the key benefit is that IoC promotes decoupling and makes it very easy. You can always inject an interface into your main object and then use the interface methods to perform tasks. The main object does not need to know which dependent object is assigned to the interface. When you want to use a different class as the dependency, all you need to do is swap the old class for a new one in the config file, without a single line of code change. You could argue that this can be done in code using various interface design patterns, but an IoC framework makes it a walk in the park. So even as a newbie you become adept at leveraging various interface design patterns like bridge, factory, etc.
Clean code - as most object creation and object life-cycle operations are delegated to the IoC container, you are saved from writing boilerplate, repetitive code. So you have cleaner, smaller, and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test it in isolation. You can also easily inject dependencies in your test cases and see how the different components interact.
Property configurators - almost all applications have a properties file where they store application-specific static properties. Without IoC, developers need to write wrappers that read and parse the properties file and store the properties in a format the application can access. All the IoC frameworks provide a way of injecting static properties/values into specific classes, so this again becomes a walk in the park.
These are some of the points I can think of right away; I am sure there are more.
I am having a really hard time reconciling IoC, interfaces, and events. Let's see if I can explain this without writing a book.
I'm just getting started with IoC and I'm playing with Spring. We have a simple data layer that was built long before EF or the others. One of the classes is a DBProcedure which has some methods and events.
I created an IDBProcedure interface that the 'real' DBProcedure class implements. In TDD fashion, I'd like to be able to swap out the 'real' DBProcedure class for another that implements the same interface for testing. To me, this means that the IDBProcedure interface should be defined in a different namespace/project than my data layer, right?
But a DBProcedure can raise some events, and those events deliver custom EventArgs-derived classes. Does that mean the EventArgs classes need to be defined outside the data layer too? It seems so, to make the interface work, but that seems bad because it spreads data-layerness around.
On the other hand, maybe I have the wrong idea - is it OK to reference the data layer namespace when testing, to get the interface and event definitions, even though I'm not using any of the 'real' classes?
Yes - you need to move the interfaces, and all the types they depend on, somewhere else, because you do not want the interfaces module to depend on the implementations.
The typical choice for this is one of two alternatives
Impl ----> Api <---- Client
(The implementation depends on the API, the client depends on the API, and everything lives in the API module.)

Impl ----> Api <---- Client
   \        |          /
    \       v         /
     +--> Model <----+
Here everyone also depends on a common "Model" module, which contains the enums and other shared artifacts. The advantage of this version is that multiple API modules can share the same common enums and other types (because you usually don't want APIs to depend on other API modules).
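Applied to the question's example, a minimal sketch (the member names are invented) of the Api module and the data layer that implements it:

using System;

// In the Api (contracts) project: the interface and the EventArgs types
// it references live together, so test doubles never need a reference to
// the real data layer.
public class DBProcedureCompletedEventArgs : EventArgs
{
    public int RowsAffected { get; set; }
}

public interface IDBProcedure
{
    event EventHandler<DBProcedureCompletedEventArgs> Completed;
    void Execute();
}

// In the data layer (Impl) project, referencing only the Api project:
public class DBProcedure : IDBProcedure
{
    public event EventHandler<DBProcedureCompletedEventArgs> Completed;

    public void Execute()
    {
        // ... real database work ...
        Completed?.Invoke(this, new DBProcedureCompletedEventArgs { RowsAffected = 1 });
    }
}

A test double in the test project implements IDBProcedure the same way, without dragging any data-layer dependency into the tests.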