Custom Membership Provider and Domain-Driven Design - asp.net-mvc-2

I'm writing a custom membership provider, but I'm not sure where to put it. I don't really have any code to show you; basically, the provider needs a reference to System.Web.Security in order to inherit from the MembershipProvider class, but it also needs data access (i.e. a connection string + LINQ to SQL) to do simple tasks such as ValidateUser.
How can I write a membership provider that adheres to the principles of DDD that I've read about in Pro ASP.NET MVC 2 Framework (Apress)? My one thought was to write another class in my domain project that does all the database-related "work". In essence I would end up with double the number of methods. Also, can this work with dependency injection (IoC)?
Hope this isn't too general ...
Look forward to the hive-mind's responses!
Edit: I just noticed in a default MVC2 project there is an AccountController which has a wrapper around an IMembershipService. Is this where my answer lies? The AccountController seems to have no database access component to it.
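For what it's worth, here is a rough sketch of that IMembershipService route. The interface, repository, and class names below are invented for illustration and only loosely follow the shape of the default MVC 2 template; treat it as one possible arrangement, not the framework's prescribed one:

    using System.Web.Mvc;
    using System.Web.Security;

    // Domain project: no dependency on System.Web.Security here.
    public interface IUserRepository
    {
        User GetByUserName(string userName);   // e.g. backed by LINQ to SQL
    }

    public class User
    {
        public string UserName { get; set; }
        public string PasswordHash { get; set; }

        public bool CheckPassword(string password)
        {
            // The domain decides what "valid" means (hashing omitted for brevity).
            return PasswordHash == password;
        }
    }

    // Web project: the same kind of wrapper the default AccountController uses.
    public interface IMembershipService
    {
        bool ValidateUser(string userName, string password);
    }

    public class DomainMembershipService : IMembershipService
    {
        private readonly IUserRepository _users;

        public DomainMembershipService(IUserRepository users)   // supplied by your IoC container
        {
            _users = users;
        }

        public bool ValidateUser(string userName, string password)
        {
            var user = _users.GetByUserName(userName);
            return user != null && user.CheckPassword(password);
        }
    }

    // The controller only sees the interface.
    public class AccountController : Controller
    {
        private readonly IMembershipService _membership;

        public AccountController(IMembershipService membership)
        {
            _membership = membership;
        }

        [HttpPost]
        public ActionResult LogOn(string userName, string password)
        {
            if (_membership.ValidateUser(userName, password))
            {
                FormsAuthentication.SetAuthCookie(userName, false);
                return RedirectToAction("Index", "Home");
            }
            ModelState.AddModelError("", "The user name or password provided is incorrect.");
            return View();
        }
    }

With this shape the domain project never references System.Web.Security, and swapping LINQ to SQL for something else only touches the IUserRepository implementation.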

ASP.NET's user management features are super invasive. They even spam the database with profile tables and whatnot.
When I had to implement user management for my application, I successfully avoided all that mess and was still able to use the ASP.NET built-in roles, user identities, etc. I'm moving away from all of that now, though, because my domain is getting smart enough to decide what can be seen and done, so it makes no sense to duplicate that in the UI client.
So... yeah. Still have zero problems with this approach. Haven't changed anything for ~4 months.
Works like a charm.
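The answer doesn't include code, so the following is only a guess at the general shape, not the poster's actual implementation: a common way to keep the built-in identities, roles, and [Authorize] working without any MembershipProvider is to issue a plain forms-authentication ticket yourself and rebuild the principal on each request.

    using System;
    using System.Security.Principal;
    using System.Web;
    using System.Web.Security;

    public class Global : HttpApplication
    {
        // Call this after your own domain logic has decided the user is valid.
        public static void SignIn(HttpResponse response, string userName, string[] roles)
        {
            var ticket = new FormsAuthenticationTicket(
                1, userName, DateTime.Now, DateTime.Now.AddMinutes(30),
                false, string.Join(",", roles));                    // roles travel in UserData
            response.Cookies.Add(new HttpCookie(
                FormsAuthentication.FormsCookieName,
                FormsAuthentication.Encrypt(ticket)));
        }

        // Rebuild the principal so User.IsInRole and [Authorize(Roles = ...)] work.
        protected void Application_AuthenticateRequest(object sender, EventArgs e)
        {
            var cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
            if (cookie == null) return;

            var ticket = FormsAuthentication.Decrypt(cookie.Value);
            if (ticket == null || ticket.Expired) return;

            HttpContext.Current.User = new GenericPrincipal(
                new FormsIdentity(ticket), ticket.UserData.Split(','));
        }
    }

No provider configuration and no aspnet_* tables; the domain decides who is valid and which roles they carry.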

Related

How many layers should your small REST system have?

I am building a simple REST deployable using Spring Boot. I decided to create it by writing a failing acceptance test first, followed by TDD until it's green.
My module is pretty simple; I have 3 APIs:
Retrieve a list of data from the datastore.
Add an item to the datastore.
Delete an item from the datastore.
I feel like it is a good idea to abstract the datastore: have it backed by a Map data structure for testing purposes, and use either a NoSQL or SQL db for deployments/releases and end-to-end testing.
On the service layer side I am unsure, since it would just delegate the call to the repository with no logic.
So the standard approach would be controller->service->repository. In my case the service does not do much (possibly some exception handling, but not more), and I would end up with an interface and an implementation as extras, as well as a few more lines of code. I feel like going for a controller->repository solution in my situation, but it is not a practice I have seen, and I'm not sure how others would see it.
What's the best way to implement this sort of system?
I feel like it is a good idea to abstract the datastore
You are right. The abstraction is called 'Repository' in DDD (Domain Driven Design) for example.
On the service layer side I am unsure, since it would just delegate the call to the repository with no logic.
I'm pretty sure there is data that you want to validate, so you should have a layer in the middle (e.g. the domain layer) that is in charge of this validation.
Even so, if you feel your application is simple and doesn't require such layers, go without them. You will have a less supple design, but more simplicity at first. Be careful, though: as the app evolves, you could run into trouble.
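To make the trade-off concrete, here is a rough sketch of the controller->repository shape, with the datastore abstracted behind an interface and backed by an in-memory map. The question is about Spring Boot, but the layering decision is the same in any stack; the sketch is written in C# purely to show the shape, and all names are invented:

    using System.Collections.Generic;

    public class Item
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    // The datastore abstraction ("Repository" in DDD terms).
    public interface IItemRepository
    {
        IEnumerable<Item> GetAll();
        void Add(Item item);
        void Delete(string id);
    }

    // Map-backed implementation used by the acceptance tests.
    public class InMemoryItemRepository : IItemRepository
    {
        private readonly Dictionary<string, Item> _items = new Dictionary<string, Item>();

        public IEnumerable<Item> GetAll() { return _items.Values; }
        public void Add(Item item) { _items[item.Id] = item; }
        public void Delete(string id) { _items.Remove(id); }
    }

    // The controller->repository variant: validation lives here (or in a thin
    // domain layer) instead of in a pass-through service class.
    public class ItemsController
    {
        private readonly IItemRepository _repository;

        public ItemsController(IItemRepository repository)
        {
            _repository = repository;
        }

        public void Add(Item item)
        {
            // The validation that would otherwise justify a service layer.
            if (item == null || string.IsNullOrEmpty(item.Id))
                throw new System.ArgumentException("Item must have an id.");
            _repository.Add(item);
        }
    }

If real validation rules show up later, they can move out of the controller into a small domain/service layer without touching the repository.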
Hope this will help.
This is rather an opinion-based question, but if you are asking whether a 3-layer architecture is a must, I say no. Be pragmatic: if you don't see a reason for a class/layer/module to exist, it does not need to exist.
A repository has a purpose (to store/retrieve things), and the API layer has a purpose (to offer those things over HTTP).
Here is an article on building small services with the Spark framework: https://dzone.com/articles/building-simple-restful-api

EF + WCF in three-layered application with complex object graphs. Which pattern to use?

I have an architectural question about EF and WCF.
We are developing a three-tier application using Entity Framework (with an Oracle database), and a GUI based on WPF. The GUI communicates with the server through WCF.
Our data model is quite complex (more than a hundred tables), with lots of relations. We are currently using the default EF code generation template, and we are having a lot of trouble with tracking the state of our entities.
The user interfaces on the client are also fairly complex: sometimes an object graph of more than 50 objects is sent down to a single user interface, with several layers of aggregation between the entities. It is an important goal to be able to easily decide in the BLL which objects have been modified on the client and which have been newly created.
What would be the clearest approach to managing entities and entity states between the two layers? Self-tracking entities (STEs)? What are the most common pitfalls in this scenario?
Could those who have used STEs in a real production environment share their experiences?
STEs are supposed to solve this scenario, but they are not a silver bullet. I have never used them in a real project (I don't like them), but I spent some time playing with them. The main pitfalls I found are:
Coupling your data layer to your client application - you must share the entity assembly between projects (it also means it is a .NET-only solution, but that should not be a problem in your case).
Large data transfers - you pass 50 entities to the client, the client changes a single entity, and you pass all 50 entities back. It takes some fighting with STEs to avoid passing unnecessary data.
Unnecessary updates to the database - normally, when EF works with attached entities, it tracks changes at the property level, but STEs track changes at the entity level. So if the user modifies a single property of an entity with 100 properties, the generated update will set all of them. Avoiding this requires modifying the template and adding property-level change tracking.
The client application should use STEs directly (binding STEs to the UI) to get the most out of their self-tracking ability. Otherwise you will have to implement code that moves data from the UI back into the self-tracking entity and sets its state.
They are not proxied, so they don't support lazy loading (which, in the case of a WCF service, is good behavior).
I described a way to solve this without STEs earlier today. There is also a related question about change tracking over web services (check Richard's answer and the links provided).
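To make the entity-level vs. property-level point concrete: with plain entities (no STEs) the disconnected-update pattern looks roughly like the sketch below. It uses the later DbContext API (EF 4.1+) and invented types, so treat it as an illustration rather than the exact approach linked above:

    // EntityState lives in System.Data for EF 4.1-5 and System.Data.Entity for EF 6.
    using System.Data;
    using System.Data.Entity;

    public class Order
    {
        public int Id { get; set; }
        public string Status { get; set; }
        public decimal Total { get; set; }
    }

    public class OrdersContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }

    public static class OrderUpdater
    {
        // 'detached' arrived over WCF; this context has never seen it before.
        public static void SaveStatusOnly(Order detached)
        {
            using (var context = new OrdersContext())
            {
                context.Orders.Attach(detached);

                // Entity-level: marks every column as modified (roughly what STEs do).
                // context.Entry(detached).State = EntityState.Modified;

                // Property-level: only the Status column goes into the UPDATE.
                context.Entry(detached).Property(o => o.Status).IsModified = true;

                context.SaveChanges();
            }
        }
    }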
We have developed a layered application with STEs: a user interface layer with ASP.NET and Model-View-Presenter, a business layer, a WCF service layer, and a data layer with Entity Framework.
When I first read about STEs, the documentation said that they are easier than using custom DTOs - they were supposed to be the 'quick and easy way', with hand-written DTOs reserved for really big projects.
But we've run into a lot of problems using STEs. One of the main problems is that if your entities come from multiple service calls (for example in a master-detail view), and therefore from different contexts, you will run into problems when composing the graphs on the server and trying to save them. So our server functions still have to check manually which data has changed and then recompose the object graph on the server. A lot has been written about this topic, but it's still not easy to fix.
Another problem we ran into was that STEs won't work without WCF. The change tracking is activated when the entities are serialized. We had originally designed an architecture where WCF could be disabled and the service calls would just be in-process (this was a requirement for our unit tests, which run a lot faster without WCF and are easier to set up). It turned out that STEs are not the right choice for this.
I've also noticed that developers sometimes included a lot of data in their queries and just sent it all to the client, instead of really thinking about which data they needed.
After this project we decided that in a new project we would use custom DTOs with AutoMapper from server to client, and use the POCO template in our data layer.
So since you already state that your project is big, I would opt for custom DTOs and service functions that are specifically created for one goal, instead of 'Update(Person person)' functions that send a lot of data.
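As a rough illustration of "service functions created for one goal" (all types and names below are invented, and the data-access details are only stand-ins):

    using System.Data.Entity;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // A small DTO carrying only what this one operation needs,
    // instead of the whole Person graph.
    [DataContract]
    public class ChangePersonAddressRequest
    {
        [DataMember] public int PersonId { get; set; }
        [DataMember] public string Street { get; set; }
        [DataMember] public string City { get; set; }
    }

    [ServiceContract]
    public interface IPersonService
    {
        // One narrow, intention-revealing operation rather than Update(Person person).
        [OperationContract]
        void ChangeAddress(ChangePersonAddressRequest request);
    }

    public class PersonService : IPersonService
    {
        public void ChangeAddress(ChangePersonAddressRequest request)
        {
            // Load the plain POCO entity, apply just this change, save.
            // (Mapping larger DTOs to entities is where AutoMapper comes in.)
            using (var context = new PeopleContext())
            {
                var person = context.People.Find(request.PersonId);
                person.Street = request.Street;
                person.City = request.City;
                context.SaveChanges();
            }
        }
    }

    // Minimal data-layer stand-ins so the sketch is self-contained.
    public class Person
    {
        public int Id { get; set; }
        public string Street { get; set; }
        public string City { get; set; }
    }

    public class PeopleContext : DbContext
    {
        public DbSet<Person> People { get; set; }
    }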
Hope this helps :)

Membership.Provider and ASP.NET MVC2: Do I Really Need It?

I see a lot of articles and posts on how to create a custom MembershipProvider, but I haven't found any explanation as to why I must/should use one in my MVC2 web app. Apart from "Hey, security is hard!", what are the critical parts of the whole MembershipProvider subsystem that I should know about but don't, because I've only read about how to override parts of it? Is there some "behind the scenes magic" that I don't see and will have to implement myself? Is there some attribute or other piece of functionality that will trip over itself without a properly set up MembershipProvider?
I am building a web app using a DDD approach, so the way I see it, I have a User entity and a Group entity. I don't need to customize ValidateUser() under the provider; I can just have it as a method on my User entity. I have to have a User object anyway, to implement the things the MembershipProvider doesn't cover, don't I?
So, what gives? :)
No, you don't need it. I have sites that use it and sites that don't. One reason to use it is that plumbing is already there for it in ASP.NET and you can easily implement authentication by simply providing the proper configuration items (and setting up the DB or AD or whatever).
A RoleProvider, on the other hand, comes in very handy when using the built-in AuthorizeAttribute and its derivatives. Implementing a RoleProvider will save you a fair amount of custom programming on the authorization side.
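To illustrate what that saves you (the role lookup here is a hard-coded stand-in; in practice it would call your own User/Group entities), a skeletal RoleProvider only needs real implementations of the lookup members that authorization actually uses, and everything else can stay unimplemented. It is registered under <roleManager> in web.config.

    using System;
    using System.Web.Mvc;
    using System.Web.Security;

    // The payoff: once a RoleProvider is registered, the built-in attribute just works.
    public class AdminController : Controller
    {
        [Authorize(Roles = "Administrators")]
        public ActionResult Index()
        {
            return View();
        }
    }

    public class DomainRoleProvider : RoleProvider
    {
        public override string ApplicationName { get; set; }

        public override string[] GetRolesForUser(string username)
        {
            // In a real app this would ask your domain's User/Group entities;
            // hard-coded here purely so the sketch stands on its own.
            return username == "admin" ? new[] { "Administrators" } : new string[0];
        }

        public override bool IsUserInRole(string username, string roleName)
        {
            return Array.IndexOf(GetRolesForUser(username), roleName) >= 0;
        }

        // Role administration isn't needed for [Authorize], so leave it unimplemented.
        public override void CreateRole(string roleName) { throw new NotSupportedException(); }
        public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
        public override bool RoleExists(string roleName) { throw new NotSupportedException(); }
        public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
        public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
        public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
        public override string[] GetAllRoles() { throw new NotSupportedException(); }
        public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
    }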

Need advice on removing Zend Framework dependency

I'm in the middle of converting an existing app built on top of Zend Framework to work as a plugin within WordPress, as opposed to the standalone application it currently is.
I've never really used Zend, so I've had to learn about it in order to know where to begin. I must say that at first I didn't think much of Zend, but it's funny: the more I understand how it works, the more I keep questioning why I'd want to remove the dependency on what is clearly a well-thought-out framework. Then I'm reminded that it's because of WordPress.
Now, I already know there are WP plugins to make Zend play nice with WP. In fact, I'm already using a Zend Framework plugin just to get the app functional within the WP admin area, which is allowing me to review code, modify code, refresh the browser, review changes, debug, again and again.
Anyway, I don't really have a specific question; instead I'm looking for advice from any Zend masters out there on how best to go about a task like this one... so any comments, advice, examples or suggestions would be super.
One area I'm a little stuck on is converting some of the zend->db calls to work as wpdb calls instead... specifically zend->db->select... not sure what to do with that one.
Also, how to handle all the URL routing, with its automatic calls to "whateverAction" within their respective controller files.
Any help would be great! Thanks
You're probably facing an uphill battle trying to get some of the more major components of ZF to work in harmony with WordPress. It sounds like you've got a full MVC app that you're trying to integrate into a second app with a very different architecture.
You probably want to think about which components handle which responsibilities. WordPress has its own routing and controller system that revolves around posts, pages and 'The Loop'. This is entirely different from Zend's action controllers and routing system.
It's possible you could write a WP hook to evaluate every incoming request and decide if it should be handled by WP or a ZF controller. However, it is doubtful you would be able to replace WP's routing system outright with ZF's or vice versa.
The same idea applies where Zend_Db is concerned. There's nothing stopping you from using Zend_Db to access WordPress's database, but trying to somehow convert or adapt Zend_Db calls into wpdb calls sounds painful. If you have a large model layer, you probably want to hang on to it and find a way to translate data from those models into the posts/pages conventions that WordPress uses.
Personally, I would use ZF to build a robust business layer that can be queried through an object model via a Wordpress plugin, and then rely on Wordpress to do the routing and handle the views.
A Zend_Db_Select is just an SQL query (built with objects), so it can be used like any other query once you cast it to a string. Ex.:
mysql_query((string)$zendDbSelectObject);
Inside WordPress you could equally hand that string to $wpdb, e.g. $wpdb->get_results((string)$zendDbSelectObject);

ADO Entity Best Practice

I'm just working on this interesting thing with ADO.NET entities and need your opinion. Often a solution would be created to provide a service (WCF or web service) to allow access to the DB via the Entity Framework, but I'm working on an application that runs internally and has domain access pretty much all the time. The question is whether it's good practice to create a data service for the application to interface with, or whether I could go from the WPF application directly to the Entity Framework. What's the best practice in this case, and what are some of the pros and cons of the two different approaches?
By using entity framework directly, do you mean that the WPF application would connect to the database, or that it would still use services but re-use the entities?
If it's the first approach, I tend to be against this because it means multiple clients connecting to the database, which a) is an additional security concern, b) could make it more expensive from a licensing perspective, and c) means you don't get the benefits of connection pooling. Databases are the most expensive things to scale so I'd try to design the solution to use services and reduce the pressure on the database. But there are times when it's appropriate. One thing I've noticed is that applications which do start out connecting directly tend to get refactored to go via a service later; it seldom happens the other way around. But it might also be a case of YAGNI.
If it's the second approach, I think that's fine. It's common for people looking at WCF to think "service oriented" - that is, there should be a strict contract between services and things shouldn't be shared. But a "multi-tier" application, which is only designed to have one client, is also a perfectly valid architecture and doesn't need to be so decoupled. In that case, reusing the entities on both sides of the service boundary should be fine. However, I'm not sure how easy this is to do with EF specifically, since I haven't used it except in experiments.
It really depends on the level of complexity and the required level of coupling/modularity. I think a good compromise would be to create an EF model in its own library (or the like) with a simple level of abstraction. In that scenario, if you later choose to change the model to use an exposed service instead of direct access, it shouldn't be a big deal to refactor the existing code, and the new service could utilize the existing library.
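As a rough sketch of that compromise (all type names invented): the WPF app only ever sees a small interface from the data-access library, so today's direct-EF implementation can later be swapped for a service-backed one without touching the client code.

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    // --- DataAccess class library ------------------------------------------
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // The "simple level of abstraction" the WPF app depends on.
    public interface ICustomerData
    {
        IList<Customer> GetCustomers();
        void Add(Customer customer);
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    // Today: the internal WPF app talks to EF directly through this class.
    public class EfCustomerData : ICustomerData
    {
        public IList<Customer> GetCustomers()
        {
            using (var context = new ShopContext())
            {
                return context.Customers.ToList();
            }
        }

        public void Add(Customer customer)
        {
            using (var context = new ShopContext())
            {
                context.Customers.Add(customer);
                context.SaveChanges();
            }
        }
    }

    // Later, if direct DB access becomes a problem, a WcfCustomerData : ICustomerData
    // can call an exposed service instead, and the WPF code keeps using ICustomerData.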