When would I use an AppDomain? - .net-2.0

I'm fairly new to reflection, and I was wondering what I would use a (second) AppDomain for. What practical application would one have in a business application?

There are numerous uses. A secondary AppDomain can provide a degree of isolation similar to the isolation an OS provides between processes.
One practical use I've put it to is dynamically loading "plug-in" DLLs. I wanted to support scanning a directory for DLLs at startup of the main executable, loading them and checking their types to see if any implemented a specific interface (i.e. the contract of the plug-in). The problem is that once an assembly is loaded, you cannot unload it, even if it turns out to contain no types that implement the interface sought. Rather than carry those extra assemblies and types around in your process, you can create a secondary AppDomain, load each assembly there, and examine its types. When you're done, you can discard the secondary AppDomain, and the assemblies and types go with it.
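To make that concrete, here is a minimal sketch of the approach (the IPlugin interface and the plug-in path are hypothetical, and error handling is omitted). The scanner runs inside the secondary AppDomain, so the inspected assemblies never pollute the main domain:

    using System;
    using System.Collections.Generic;
    using System.Reflection;

    // Runs inside the secondary AppDomain; deriving from MarshalByRefObject
    // lets the main domain call it through a proxy instead of loading it locally.
    public class PluginScanner : MarshalByRefObject
    {
        public string[] FindPluginTypes(string assemblyPath)
        {
            // Loads into the *secondary* domain only.
            Assembly assembly = Assembly.LoadFrom(assemblyPath);
            List<string> names = new List<string>();
            foreach (Type type in assembly.GetTypes())
            {
                if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                    names.Add(type.FullName);
            }
            return names.ToArray();
        }
    }

    // In the main domain:
    AppDomain scanDomain = AppDomain.CreateDomain("PluginScan");
    try
    {
        PluginScanner scanner = (PluginScanner)scanDomain.CreateInstanceAndUnwrap(
            typeof(PluginScanner).Assembly.FullName,
            typeof(PluginScanner).FullName);
        string[] pluginTypes = scanner.FindPluginTypes(@"plugins\SomePlugin.dll");
    }
    finally
    {
        AppDomain.Unload(scanDomain); // every assembly loaded there goes with it
    }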

99% of the time I would avoid additional AppDomains. They behave much like separate processes: you must marshal data from one domain to the other, which adds complexity and performance costs.
People have attempted to use AppDomains to get around the problem that you can't unload assemblies once they have been loaded into an AppDomain. So you create a second AppDomain where you can load your dynamic Assemblies and then unload the complete AppDomain to free the memory associated with the Assemblies.
Unless you need to dynamically load and unload assemblies, they are not really worth worrying about.
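To illustrate the marshalling point: anything that crosses an AppDomain boundary must either be serializable (copied by value) or derive from MarshalByRefObject (accessed through a remoting proxy). A minimal sketch with hypothetical types:

    using System;

    [Serializable]
    public class OrderSnapshot      // crosses the boundary by value: a serialized copy
    {
        public int Id;
        public decimal Total;
    }

    public class OrderService : MarshalByRefObject   // crosses by reference: a proxy
    {
        public OrderSnapshot Load(int id)
        {
            // The return value is serialized back to the calling domain; every
            // such call goes through the remoting infrastructure, which is
            // where the complexity and performance cost come from.
            OrderSnapshot snapshot = new OrderSnapshot();
            snapshot.Id = id;
            return snapshot;
        }
    }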

AppDomains are useful when you have to have multiple instances of a singleton. For example, say you have an assembly that implements a communication protocol to some device, and this assembly uses singletons. If you want to instantiate this class multiple times (to talk to multiple devices) and you want the instances not to interfere with one another, AppDomains are perfect for the purpose.
It does make programming more difficult, however, as you have to do more work to communicate across AppDomain boundaries.
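The trick works because static state is per-AppDomain. A small sketch (the DeviceDriver class and its static field are invented for illustration):

    using System;

    public class DeviceDriver : MarshalByRefObject
    {
        // Hypothetical "singleton" state: each AppDomain gets its own copy.
        private static string _connectedPort;

        public void Connect(string port) { _connectedPort = port; }
        public string ConnectedPort { get { return _connectedPort; } }
    }

    AppDomain domainA = AppDomain.CreateDomain("DeviceA");
    AppDomain domainB = AppDomain.CreateDomain("DeviceB");

    DeviceDriver a = (DeviceDriver)domainA.CreateInstanceAndUnwrap(
        typeof(DeviceDriver).Assembly.FullName, typeof(DeviceDriver).FullName);
    DeviceDriver b = (DeviceDriver)domainB.CreateInstanceAndUnwrap(
        typeof(DeviceDriver).Assembly.FullName, typeof(DeviceDriver).FullName);

    a.Connect("COM1");
    b.Connect("COM2");
    // a.ConnectedPort == "COM1" and b.ConnectedPort == "COM2":
    // the two "singletons" live in separate domains and do not interfere.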

GWT Module Design

I have an app with two components.
A customer facing one for submitting restaurant orders.
A vendor facing one for viewing restaurant orders.
Should I have two modules with different entry points, given that there is no shared code (except for the domain model objects) between the components?
There is one reason I can think of for why you may want to do this - which is to reduce download size, since some screens/logic may not be used by the customer (and you want the customer pages to load as fast as possible). However you can also achieve this with code splitting: https://developers.google.com/web-toolkit/doc/latest/DevGuideCodeSplitting
I think having two modules is fine as well. No big deal there.
If you are not going to deploy them on two separate nodes, I would go with one module, because you then have only one set of i18n files to maintain, fewer static files (HTML), and just one module descriptor (no duplication).
If you decide to use just one module, code splitting is a good thing to consider to reduce the size of the JS the user has to download.
There can't be a 100% correct answer; it really depends on your project.
Separation into two compiled modules might be a good idea when the common logic shared between them is quite small compared to the customer/vendor-specific logic, and most of the time you are writing code for only one of the two. In that case you will get faster refresh times in development mode and faster compilation of the individual modules, compared to the case where everything is merged together.
But there is a catch: at some point there might be a requirement for a merged customer/vendor mode, because some users are customers and vendors at the same time.
I personally prefer an approach where each logical part of the application gets its own GWT module, a root module links them all together, and a couple of dev-only modules let you start just one specific part of the application. Example module structure:
Customer module - not compiled separately, depends on Common module
Vendor module - not compiled separately, depends on Common module
Common module - not compiled separately
App module - compiled separately, depends on Customer and Vendor modules
VendorStandalone module - compiled separately, depends on Vendor module, used only for development
CustomerStandalone module - compiled separately, depends on Customer module, used only for development
Such a structure allows you to have a fast development mode (where that is possible at all), while staying prepared for the case where vendor and customer functionality have to be provided together.
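As a rough illustration, the App module's descriptor might look something like this (module and package names are hypothetical):

    <!-- App.gwt.xml: links the Customer and Vendor modules together -->
    <module rename-to="app">
      <inherits name="com.example.customer.Customer"/>
      <inherits name="com.example.vendor.Vendor"/>
      <entry-point class="com.example.app.client.AppEntryPoint"/>
    </module>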
My choice of design (using MVP):
1) Single module.
2) Same login page (the User POJO must have a type, i.e. vendor or customer).
3) In onModuleLoad, based upon that type, open the corresponding vendor or customer presenter.
Why?
1) Code reusability.
2) Reduced maintenance compared to two modules.
Well, I am also waiting to see more design options.

What does "monolithic" mean?

I've seen it in the context of classes. I suspect it means that the class could use being broken down into logical subunits, but I can't find a good definition. Could you give some examples?
Thanks for the help.
Edit: I love the smart replies, but I'm obviously referring to "monolithic" within a software context. I know about monoliths, megaliths, dolmens, and all the stone-related contexts. Gee, I have enough of them in my country...
Interesting question. I don't think there's a formal definition of a monolithic class, but you've got the idea: a class that contains multiple components that are logically unconnected, or pointlessly coupled, is a monolithic class.
If you've read The Pragmatic Programmer, which I strongly recommend, you can define a monolithic class as an anti-pattern that goes against almost everything from that book.
As for examples, you'll find more in the realm of chip and OS design, where there are formal definitions of monolithic chips/kernels, which are similar to a monolithic class. Here are some examples, although each of them can be argued against being on this list:
JOGL - Java bindings for OpenGL. This could be arguable, and with good reason.
Most academic projects - For obvious reasons.
If you started programming alone, rather than joining a team, then chances are you can open one of your first projects, and there will be a class that is monolithic.
If you look up the etymology of the word, you'll see it comes from the Greek monos (single) and lithos (stone). In the software context you mention, it describes a single-tiered application in which the code for the user interface and the data access are combined into a single program on a single platform.
"Monolithic" is a term that has been used to flame succesful software. This link exposes the assumptions inherent in the term, and their limited usefulness.
The basic assumption is that a system works better if it is built from software components that each have an individual, well-defined task. Intuitively, this seems right. If each component works, the entire system must work, right?
In reality, it's not that easy. A larger, compositional (non-monolithic) system can miss a critical function, even when there is no single component to blame. This happens when the architectural design fails to allocate a function to any specific component. This can happen especially if it's a function which doesn't map cleanly to a single component.
Now Linux (to continue with the linked example) in reality is not monolithic. It has a modular userspace on top of a monolithic kernel, a userspace that comes with many separate utilities. Except when it doesn't.
My definition of a monolithic design in software development is a design in which additional functionality must be added to a single, indivisible block of code.
PROS:
Everything is in one place, and therefore easy to find.
Can be simpler, given there are fewer relationships to consider (though it can also be more complex; see the cons).
CONS:
Over time, as functionality is added, the complexity of the system may increase to the point where new features are extremely hard or impossible to implement.
Can make it difficult for multiple developers to work with; e.g. Entity Framework EDMX files hold the entire database model in a single file, which can be extremely difficult for several developers to work on at once.
Reduced reusability: by definition it has no smaller components that can be reused and re-purposed to solve other problems, unless a complete copy of the code is made and then modified.
A monolithic architecture is a model of software structure created as one piece, where all the Rails tools (ActionMailer, ActiveJob, ActionCable, etc.) are gathered together with the code those tools apply to. The tools are not connected with each other, but they are also not autonomous.
If one feature needs changes, they will influence the whole process and the other features, because everything is part of one process.
Let’s recall what Ruby on Rails is, what it can offer, its pros and cons. Its most important benefit is that it is easy to work with.
If you run rails new, you immediately get a new application; then you can create any REST API you want and use Rails helpers and generators, which make development even easier.
If you need to send emails in your Rails app, then use Rails ActionMailer. When you need to do some hard processing, ActiveJob will help you. With Rails 5 you will also be able to use websockets out of the box. Thus, it will be easy to create chats or make your application more interactive.
As long as you use the correct DSL syntax, you get all of that and more right away. Moreover, you don't have to know everything about the internal implementation of these tools; you write the DSL and receive the expected result.
It means something is the opposite of modular. A modular application can have parts, referred to as modules, replaced without requiring replacement of the entire application, whereas a monolithic application, after having a part fixed or upgraded, must be replaced in its entirety.
From Wikipedia: "Modularity is desirable, in general, as it supports reuse of parts of the application logic and also facilitates maintenance by allowing repair or replacement of parts of the application without requiring wholesale replacement."
So in the context of a monolithic class, all of its features are self-contained, and if you want to add or alter a feature of the class, you need to alter/add code in the class and recompile it. Conversely, a modular class exposes access to functionality that is implemented externally. For example, a "Calculator" class may use a separate "Add" class for actually adding numbers, call a "Multiply" function from a separate library, or even call an "Amortize" function from a web service. As long as each of these functional parts can be altered externally to the class, it is modular.
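A toy sketch of that "Calculator" idea (all names invented for illustration): the monolithic version bakes every operation into the class, while the modular version delegates to replaceable parts:

    using System.Collections.Generic;
    using System.Linq;

    // Monolithic: every operation lives inside the class itself.
    // Adding "Amortize" means editing and recompiling this class.
    public class MonolithicCalculator
    {
        public decimal Add(decimal a, decimal b) { return a + b; }
        public decimal Multiply(decimal a, decimal b) { return a * b; }
    }

    // Modular: operations are supplied from outside and can be swapped
    // (a local class, a separate library, or a web-service wrapper).
    public interface IOperation
    {
        string Name { get; }
        decimal Apply(decimal a, decimal b);
    }

    public class ModularCalculator
    {
        private readonly Dictionary<string, IOperation> _operations;

        public ModularCalculator(IEnumerable<IOperation> operations)
        {
            _operations = operations.ToDictionary(op => op.Name);
        }

        public decimal Run(string name, decimal a, decimal b)
        {
            return _operations[name].Apply(a, b); // implementation lives elsewhere
        }
    }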

Common Libraries at a Company

I've noticed in pretty much every company I've worked that they have a common library that is generally shared across a number of projects. More often than not this has been a single companyx-commons project that ends up as a dumping ground for common programs including:
Command Line Parsers
File Utilities
Framework Helpers
etc...
Some of these are well thought out and some duplicate functionality found in Apache commons-lang, commons-io etc..
What are the things you have in your common library and more importantly how do you structure the common libraries to make them easy to improve and incorporate across other projects?
In my experience, the single biggest factor in the success of a common library is user buy-in, the users in this case being other developers; the culture of your workplace and teams will be a big factor too.
Separate libraries (projects/assemblies if you're in .Net) for different application tiers is essential (e.g: there's obviously no point putting UI and data access code together).
Keep things as simple as possible; what you don't put in a common library is often at least as important as what you do. Users of the library won't want to have to think, so usage needs to be super easy.
The golden rule we stuck to was keeping individual functions focused on a single task - do one thing and do it well (or very, very well); don't provide something that tries to take every possibility into account. The more reusable you think you're making it, the less likely it is to be used. Code Complete (the book) has some excellent content on common libraries.
A good approach to setting up or improving a library is to do regular code reviews and retrospectives; find good candidates you've already come up with and consider refactoring them into a library for future projects. A good candidate will be something that more than one developer has had to write on more than one project (for example).
Set up some sort of simple and clear governance of the libraries - someone who can 'own' a specific library and ensure its overall quality (such as a senior dev or team lead).
I have so far written most of the common libraries we use at our office.
We have certain button classes that are just slightly more useful to us than the standard buttons
A database management class that does some internal caching and can connect to ODBC, OLEDB, SQL, and Access databases without even the flip of a parameter
Some grid and list controls that are multi-threaded, so we can add large amounts of data to them without the program slowing down and without having to write all the multithreading code every time there is a performance issue with a list box/combo box.
These classes make it easier for all of us to work on each other's code and know how exactly they work since we all use the exact same interfaces throughout our products.
As far as organization goes, all of the DLL's are stored along with their source code on a shared development drive in the office that we all have access to. (We're a pretty small shop)
We split our libraries by function.
Common.Ui.dll has base classes for UI elements.
Common.Data.dll is sort of a wrapper around the Enterprise Library data access classes.
Common.Business is a dumping ground for other common classes that don't fit into one of those.
We create other specialized dlls as needs arise.

MEF vs. any IoC

Looking at Microsoft's Managed Extensibility Framework (MEF) and various IoC containers (such as Unity), I am failing to see when to use one type of solution over the other. More specifically, it seems like MEF handles most IoC type patterns and that an IoC container like Unity would not be as necessary.
Ideally, I would like to see a good use case where an IoC container would be used instead of, or in addition to, MEF.
When boiled down, the main difference is that IoC containers are generally most useful with static dependencies (known at compile-time), and MEF is generally most useful with dynamic dependencies (known only at run-time).
As such, they are both composition engines, but the emphasis is very different for each pattern. Design decisions thus vary wildly, as MEF is optimized around discovery of unknown parts, rather than registrations of known parts.
Think about it this way: if you are developing your entire application, an IoC container is probably best. If you are writing for extensibility, such that 3rd-party developers will be extending your system, MEF is probably best.
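To make the contrast concrete, here is a hedged sketch (ILogger and FileLogger are hypothetical; Unity and MEF as shipped with .NET 4): a container is handed explicit registrations of known parts, while MEF discovers whatever parts a catalog can find:

    // IoC container (Unity): the mapping is known and registered up front.
    var unity = new UnityContainer();
    unity.RegisterType<ILogger, FileLogger>();
    ILogger logger = unity.Resolve<ILogger>();

    // MEF: parts are discovered at run time from whatever sits in a folder.
    [Export(typeof(ILogger))]
    public class FileLogger : ILogger { /* ... */ }

    var catalog = new DirectoryCatalog("plugins");            // scan a directory
    var container = new CompositionContainer(catalog);
    IEnumerable<ILogger> loggers = container.GetExportedValues<ILogger>(); // zero or many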
Also, the article in Pavel Nikolov's answer provides some great direction (it is written by Glenn Block, MEF's program manager).
I've been using MEF for a while, and the key factor for when we use it instead of IoC products is that we regularly have 3-5 implementations of a given interface sitting in our plugins directory at any given time. Which of those implementations should be used is something that can only be decided at runtime.
MEF is good at letting you do just that. Typically, IoC is geared toward making sure you could swap out, for the canonical example, an IUserRepository based on ORM Product 1 for one based on ORM Product 2 at some point in the future. However, most IoC solutions assume that there will be only one IUserRepository in effect at a given time.
If, however, you need to choose one based on the input data for a given page request, IOC containers are typically at a loss.
As an example, we do our permission checking and our validation via MEF plugins for a big web app I've been working on for a while. Using MEF, we can look at the record's CreatedOn date, go digging for the validation plugin that was actually in effect when the record was created, and run the record through BOTH that plugin AND the validator that's currently in effect, comparing the record's validity over time.
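A hedged sketch of that pattern (the interface, the EffectiveFrom marker, and the Record type are invented for illustration): import every available validator with [ImportMany], then pick the one that was in effect when the record was created:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.Linq;

    public interface IValidator
    {
        DateTime EffectiveFrom { get; }      // hypothetical "in effect since" marker
        bool Validate(Record record);
    }

    public class ValidationRunner
    {
        // MEF hands us *all* IValidator implementations found in the catalog.
        [ImportMany]
        public IEnumerable<IValidator> Validators { get; set; }

        public bool ValidateAsOfCreation(Record record)
        {
            // Choose the validator that was in effect when the record was created.
            IValidator effective = Validators
                .Where(v => v.EffectiveFrom <= record.CreatedOn)
                .OrderByDescending(v => v.EffectiveFrom)
                .First();
            return effective.Validate(record);
        }
    }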
This kind of power also lets us define fall-through overrides for plugins. The apps I'm working on are actually the same codebase deployed for 30+ implementations, so we typically go looking for plugins by asking for:
An interface implementation that is specific to the current site and the specific record type in question.
An interface implementation that is specific to the current site, but works with any kind of record.
An interface that works for any site and any record.
That lets us bundle a set of default plugins that will kick in only if a more specific implementation doesn't override them with customer-specific rules.
IOC is a great technology, but really seems to be more about making it easy to code to interfaces instead of concrete implementations. However, swapping those implementations out is more of a project shift kind of event in IOC. In MEF, you take the flexibility of interfaces and concrete implementations and make it a runtime decision between many available options.
I apologize for being off-topic. I simply wanted to say that there are two flaws that render MEF an unnecessary complication:
It is attribute-based, which doesn't do any good in helping you figure out why things work as they do. There's no way to get at the details buried in the internals of the framework to see what exactly is going on. There is no way to get a tracing log, or to hook into the resolving mechanism and handle unresolved situations manually.
It doesn't have any troubleshooting mechanism to figure out why some parts get rejected. Despite pointing at a failing part, it doesn't tell you why that part failed.
So I am very disappointed with it. I spent too much time fighting windmills trying to bootstrap a few classes instead of working on the real problems. I'm convinced there is nothing better than the old-school dependency injection technique, where you have full control over what is created and when, and can trace everything in the VS debugger. I wish somebody who advocates MEF would present a bunch of good reasons why I should choose it over plain DI.
I agree that MEF can be a fully capable IoC framework. In fact I'm writing an application right now based on using MEF for both extensibility and IoC. I took the generic parts of it and made it into a "framework" and open sourced it as its own framework called SoapBox Core in case people want to see how it works.
In particular, take a look at how the Host works if you want to see MEF in action.

What are the pros and cons of using an IoC container?

Using an IoC container will decrease the speed of your application, because most of them use reflection under the hood. They can also make your code more difficult to understand(?). On the bright side, they help you create more loosely coupled applications and make unit testing easier. Are there other pros and cons for using/not using an IoC container?
If you're using an IOC container in a simple fashion, reflection is only used on startup - the application is wired up to start with, and then it runs as normal without any intervention from the container. Of course, if you're using the IOC to resolve dependencies after you've started running, that may be slightly different - although I'd still expect it to resolve lazily and cache unless you've got it configured to create new instances each time.
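A minimal sketch of that "wire up once at startup" shape (types are hypothetical; Unity used for illustration): the reflection cost is paid in the composition root, and singleton-style lifetimes mean later resolutions are cache hits:

    // Composition root: runs once at startup; this is where reflection happens.
    var container = new UnityContainer();
    container.RegisterType<IOrderRepository, SqlOrderRepository>(
        new ContainerControlledLifetimeManager()); // one cached instance per container
    container.RegisterType<OrderService>();

    // The resolved graph is just ordinary wired-up objects; the container
    // is not involved again unless you ask it to resolve something new.
    OrderService service = container.Resolve<OrderService>();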
As for making the code harder to understand - quite the reverse! With the dependencies explicitly stated, it's much easier to understand each component, and the configuration file(s) make it clear how the whole application hangs together.
Well, I suppose a con I've experienced is that some developers don't seem to be able to grasp IoC. We've had a few people who were against it for no reason other than that they didn't understand it. (Not saying that's a bad reason to be against something, not at all.)
It does add a bit of abstraction that always seems to manage to confuse someone or other, but I'd say the pros far outweigh the cons in most cases.
I think it is fair to say that if you have an expert understanding of how to use IoC and tend to write good code anyway, then IoC will make your code easier to understand on all but the smallest systems.
However, if you are working somewhere where most classes/methods are very large and the concept of refactoring has not yet taken hold, then trying to use an IoC container is likely just to make the software harder to understand. The container also has to be learned by everyone who programs on the project, so that may be a consideration.
I see IOC as icing on the cake; I like the icing but only on a nice cake. If the cake is not nice to start with, sort out the cake first.
As to the performance overhead of using IoC, I don't see this as a problem in most cases. The overhead need not be large, and given the speed of today's CPUs, most of your run time is likely to be data access anyway. If an IoC container proved too slow for a given bit of code, I would look at adding some caching of returned objects, or removing the IoC just from that bit of code.
I believe the assumption about reduced execution speed is much the same kind of argument as "C is faster than C#/Java". While this statement may be true for specific operations and/or structurally simple tasks, it is not the case the moment complexity rises.
The way DI frameworks let you focus on object creation and dependencies produces more efficient systems as code size increases. For large applications, I'm almost certain DI-framework-based code will outperform any alternative solution. There's simply so little redundancy in the runtime that it's hard to make it more efficient! Most of the additional overhead is also incurred only at first load.
Advanced DI containers also let you do "scope" magic that you can only dream of without the container. Using scoped proxies, Spring can do the following:
A Singleton
|
B Singleton
|
C Prototype (per-invocation)
|
D Singleton
|
E Session scope (web app)
|
F Singleton
Effectively you can have ten layers of singleton objects and all of a sudden something session scoped shows up.
Stuff like security can be injected in a totally different manner than you could manage otherwise. There's often a classical paradox: the GUI layer needs intricate knowledge of the security permissions. Quite often the services layer needs this too, but at a different level of detail (usually less detailed than the GUI). The classical approach would be to pass it around as a parameter, put it in a thread-local, or ask a service. With Spring you can just inject it straight where you need it, and no-one else needs to know.
This actually changes application development as a whole. I had a real hard time adjusting to this, but after this pain I see it is truly a lot closer to how things should be (as opposed to how we've learned to do it).
So I think DI frameworks have the potential of changing the way you make programs, with much further reaching implications than just DI. It's not just a glorified way of calling new.
I agree with all the answers so far.
What I'd like to add is that it creates a bit of overhead, so it isn't really suited to small applications.
Mid-size and larger applications benefit the most from using IoC.
You might also check out this question for more information on pros and cons: Castle Windsor Are There Any Downsides?
In most circumstances you would not even notice the performance penalty, since for "singleton" objects all the initialization is performed only once. I would also dispute the claim that IoC makes it harder to understand the code: on the contrary, IoC-style development forces you to create small, coherent classes, which are in turn easier to grok.
If you are writing a business application, using an inversion-of-control and dependency-injection container (in conjunction with other agile practices and tools) will help you out in terms of productivity and reliability.
Moreover, your application will probably spend a vast majority of its CPU time waiting for resources or waiting for human interaction and doing nothing useful. Your application should have plenty of horsepower to spare for a few microseconds of reflection.
IoC seeks to reduce coupling between objects by moving object instance management into the framework. The framework, rather than the developer, is in charge of creating and managing dependencies in your project.
Normally our code calls into a library; with IoC, the framework calls and runs our code, and this handing of control back to the framework is what "Inversion of Control" means.
It enables the execution of a method to be separated from its implementation.
It lets you switch between multiple implementations with ease.
It increases the modularity of the software.
Because dependencies are reduced, code is simpler to test and write.