Explicit transaction for entire request duration with automatic commit/rollback on errors (EF6, Web API 2, Ninject)

I'm starting a new Web API application, and I'm unsure how to handle transactions (and subsequent rollbacks in case of exceptions).
My overall goal is to have a single database connection per request, and have the entire thing wrapped in an explicit transaction.
I'll need an explicit transaction since I will be executing stored procedures as well, and need to roll back any results from those if my application should throw any exceptions.
My plan was to re-use an approach I've used in MVC applications in the past, which in rough terms was simply binding my database context in request scope using Ninject and then handling commit/rollback in the OnDeactivation event.
Let's say I have a controller with two methods.
public class MyController : ApiController {
    private readonly IRepo _repo;

    public MyController(IRepo repo) {
        _repo = repo;
    }

    public string SimpleAddElement() {
        _repo.Add(new MyModel());
        return "added";
    }

    public string ThisCouldBlowUp() {
        // read from context
        var foo = _repo.ReadFromDB();

        // execute stored procedure which changes some content
        var res = _repo.StoredProcOperation();

        // throw an exception due to a bug/failsafe condition
        if (res == 42)
            throw new Exception("Argh, an error occurred");

        return foo.ToString();
    }
}
My repo skeleton
public class Repo : IRepo {
    private readonly IMyDbContext _context;

    public Repo(IMyDbContext context) {
        _context = context;
    }
}
From here, my plan was to simply bind the repositories using
kernel.Bind<IRepo>().To<Repo>();
and provide a single database context per request using
kernel.Bind<IMyDbContext>().ToMethod(CreateCtx)
      .InRequestScope()
      .OnDeactivation(FinalizeTransaction);

private IMyDbContext CreateCtx(IContext ninjectContext) {
    var ctx = new MyDbContext();
    ctx.Database.BeginTransaction();
    return ctx;
}

private void FinalizeTransaction(IMyDbContext ctx) {
    if (true /* no errors logged on current HttpRequest.AllErrors */)
        ctx.Database.CurrentTransaction.Commit();
    else
        ctx.Database.CurrentTransaction.Rollback();
}
Now, if I invoke SimpleAddElement from my browser, FinalizeTransaction never gets invoked... so either I'm doing something wrong, or I'm missing something related to the Web API pipeline.
So how should I go about implementing a transactional "single DB session per request" module?
What is best practice?
If possible, I'd like the solution to support ASP.NET vNext as well.
I suppose one potential solution could be dropping the OnDeactivation handler and implementing an HTTP module which commits in EndRequest and rolls back in Error... but there's just something about that I don't like.
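For reference, a rough sketch of what that HTTP-module idea could look like, assuming the request-scoped context can be resolved from the Ninject kernel (the NinjectWebCommon.Kernel accessor below is hypothetical) and that the transaction is exposed via Database.CurrentTransaction:

public class TransactionPerRequestModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.EndRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;

            // Hypothetical accessor for the kernel configured in NinjectWebCommon;
            // within a request this resolves the same InRequestScope context instance.
            var ctx = NinjectWebCommon.Kernel.Get<IMyDbContext>();
            var transaction = ctx.Database.CurrentTransaction;
            if (transaction == null)
                return;

            if (app.Context.AllErrors == null || app.Context.AllErrors.Length == 0)
                transaction.Commit();
            else
                transaction.Rollback();
        };
    }

    public void Dispose() { }
}

The module would still need to be registered in web.config, and it only sees unhandled exceptions via HttpContext.AllErrors, which is part of what makes this approach feel clunky.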

You are missing an abstraction in your code. You execute business logic inside your controller, which is the wrong place. If you extract this logic to the business layer and hide it behind an abstraction, it will be trivial to wrap all business layer operations inside a transaction. Take a look at this article for some examples of this.
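A minimal sketch of what that can look like, assuming a generic ICommandHandler&lt;TCommand&gt; abstraction for business operations and an EF6 context; the interface and decorator below are illustrative, not taken from the linked article:

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Decorator that wraps every business operation in one EF6 transaction.
// Assumes IMyDbContext exposes the underlying EF6 Database property and SaveChanges.
public class TransactionCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly IMyDbContext _context;
    private readonly ICommandHandler<TCommand> _decorated;

    public TransactionCommandHandlerDecorator(IMyDbContext context, ICommandHandler<TCommand> decorated)
    {
        _context = context;
        _decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        using (var transaction = _context.Database.BeginTransaction())
        {
            try
            {
                _decorated.Handle(command);
                _context.SaveChanges();
                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }
}

The controller then depends on ICommandHandler&lt;SomeCommand&gt; instead of a repository, and the decorator is registered around every handler in Ninject, so each request's business operation runs inside exactly one transaction regardless of how many repository calls or stored procedures it makes.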

Related

How to create a Kentico form that does not store the response

Is there any way in Kentico to have a user submit a form and then email the response but not actually save the answer to the related table?
As mentioned, the emails from Kentico rely on the record being written to the DB before they trigger. Furthermore (unless I'm just unlucky), the only values you have access to are those stored in the table. I had thought that maybe you could mark the offending fields as "Field without database representation", but sadly the fields you want will all be null, so it's best not to go down that route.
I took a slightly different approach to @trevor-j-fayas in that I used the BizFormItemEvents.Insert.Before event, so that there is no trace of any log. It's a short hop from there to make use of an email template to make things look good. So my code looked as follows:
using CMS;
using CMS.DataEngine;
using CMS.EmailEngine;
using System;

[assembly: RegisterModule(typeof(FormGlobalEvents))]

public class FormGlobalEvents : Module
{
    public FormGlobalEvents() : base("FormGlobalEvents")
    {
    }

    protected override void OnInit()
    {
        CMS.OnlineForms.BizFormItemEvents.Insert.Before += Insert_Before;
    }

    private void Insert_Before(object sender, CMS.OnlineForms.BizFormItemEventArgs e)
    {
        var email = new EmailMessage();
        email.From = e.Item.GetStringValue("ContactEmail", "null@foo.com");
        email.Recipients = "no-reply@foo.com";
        email.Subject = "Test from event handler (before save)";
        email.PlainTextBody = "test" + DateTime.Now.ToLongTimeString();

        EmailSender.SendEmail(email);
        e.Cancel();
    }
}
To me, it seems cleaner not to insert the record in the first place than to delete it, but obviously the autoresponder etc. will only kick in automatically if you do save the record, so the choice is yours and ultimately depends on your preference.
Well, there are a couple of different options, but the easiest is to simply delete the record after it's inserted. Use the global event hooks to capture the BizFormItemEvents.Insert.After event; if it's your form, then delete it. Below is for Kentico 10:
using CMS;
using CMS.DataEngine;
using CMS.Forums;
using CMS.Helpers;
using CMS.IO;
using System.Net;
using System.Web;

// Registers the custom module into the system
[assembly: RegisterModule(typeof(CustomLoaderModule))]

public class CustomLoaderModule : Module
{
    // Module class constructor; the system registers the module under the name "CustomLoaderModule"
    public CustomLoaderModule()
        : base("CustomLoaderModule")
    {
    }

    // Contains initialization code that is executed when the application starts
    protected override void OnInit()
    {
        base.OnInit();
        CMS.OnlineForms.BizFormItemEvents.Insert.After += BizFormItem_Insert_After;
    }

    private void BizFormItem_Insert_After(object sender, CMS.OnlineForms.BizFormItemEventArgs e)
    {
        switch (e.Item.BizFormInfo.FormName)
        {
            case "YourFormNameHere":
                e.Item.Delete();
                break;
        }
    }
}
The other option would be to clone and modify the Online Form Web part to take the information, manually call the email and cancel the insert, but that's a lot of work when this is quicker.
Yes and no. The record is stored before the email notifications and autoresponders are sent out. Your best bet for this is to create a custom global event handler for the form submission(s) using the BizFormItemEvents.Insert.Before. This will call the event before the actual record is stored in the database. You can then cancel out of the event (which will not store the record) and send your email manually.
Handling global events
BizFormItemEvents Events

Two big question marks about CQRS

I'm a C# developer, but I've read nearly every tutorial about CQRS out there, regardless of whether the language was Java, because I want to learn the structure and basics of CQRS.
But now I think the fact that I've read so many tutorials is the problem, because the tutorials differ from each other, and now I'm confused and don't know which technique to use.
Two main questions are running wild in my head, and maybe some of you can bring some clarity:
On the command side, where should I place the logic that calls my ORM, for example?
Some tutorials do that in the command handler (which makes more sense to me) and some do it in the event handlers fired by the command handler, which in that case only does validation logic.
For the event store, and to be able to undo things, which data do I have to save to the DB? Some tutorials save the aggregate and some save the event model.
I hope someone can explain to me which pattern to use and why; maybe both, in different scenarios, I don't know.
A practical example would be great (only pseudo code).
Maybe a User registration, RegisterTheUser command:
Things to do:
Check if the username is already in use
Add user to db
Send confirmation mail (in the command or in the UserIsRegistered event?)
Fire a ConfirmationMailSent event, or only the UserIsRegistered event?
Kind regards
EDIT:
Here is my current implementation (Simple)
public class RegisterTheUser : ICommand
{
    public String Login { get; set; }
    public String Password { get; set; }
}

public class RegisterTheUserHandler : IHandleCommand<RegisterTheUser, AccountAggregate>
{
    public void Handle(AccountAggregate agg, RegisterTheUser command)
    {
        if (agg.IsLoginAlreadyInUse(command.Login))
            throw new LoginIsAlreadyInUse();

        agg.AddUserAccount(command);
        CommandBus.Execute<SendMail>(x => { });
        EventBus.Raise<UserIsRegistred>(x => { x.Id = agg.UserAccount.Id; });
    }
}

public class UserIsRegistred : IEvent
{
    public Guid Id { get; set; }
}

public class AccountAggregate : AggregateBase
{
    public AccountAggregate(IUnitOfWork uow)
    {
        UnitOfWork = uow;
    }

    private IUnitOfWork UnitOfWork { get; set; }

    public UserAccount UserAccount { get; set; }

    public void AddUserAccount(RegisterTheUser command)
    {
        UserAccount = new UserAccount
        {
            Id = Guid.NewGuid(),
            IsAdmin = false,
            Login = command.Login,
            Password = Crypto.Sha512Encrypt(command.Password)
        };

        UnitOfWork.UserAccountRepository.Add(UserAccount);
        UnitOfWork.Commit();
    }

    public Boolean IsLoginAlreadyInUse(String login)
    {
        var result = UnitOfWork.UserAccountRepository.SingleOrDefault(x => x.Login == login);
        return (result != null);
    }
}
So a number of questions, but I'll take a stab at answering.
On the command side, where should I place the logic to call my ORM for
example?
Having this logic either in the command handler or in your event handler, to me, really depends on the type of system you're building. If you have a fairly simple system, you can probably have your persistence logic in your event handlers, which receive events raised by your domain. The thinking here is that your commands handled by the command handler will already have the information needed and your command handler ends up being not much more than a router. If you need more complexity in your command handler, such as dealing with sagas, long running transactions, or some additional layer of validation, then your command handler will use your persistence layer here to pull out data (and perhaps write data) and then route the command to the proper domain or issue more commands or raise events. So I believe it really depends on the level of complexity you're dealing with.
I tend to favor simplicity over complexity when starting out, and would probably look at having that logic in the event handler to begin with. Move to the command handler if your system is more complex.
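As a rough illustration of the simple variant (persistence in the event handler), with placeholder abstractions (IHandleCommand, IHandleEvent, EventBus, IUnitOfWork) and an event that is assumed to carry the data the handler needs:

// Command handler: validates/routes to the domain; no persistence logic here.
public class RegisterTheUserHandler : IHandleCommand<RegisterTheUser>
{
    public void Handle(RegisterTheUser command)
    {
        var account = new UserAccountAggregate();
        account.Register(command.Login, command.Password);  // domain raises UserIsRegistered
        EventBus.RaiseAll(account.UncommittedEvents);        // hand the events to the handlers
    }
}

// Event handler: persistence lives here, reacting to the domain event.
public class PersistUserAccountHandler : IHandleEvent<UserIsRegistered>
{
    private readonly IUnitOfWork _unitOfWork;

    public PersistUserAccountHandler(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void Handle(UserIsRegistered evt)
    {
        _unitOfWork.UserAccountRepository.Add(new UserAccount { Id = evt.UserId, Login = evt.Login });
        _unitOfWork.Commit();
    }
}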
For the event store, and to be able to undo things, which data do I have to save to the DB? Some tutorials save the aggregate and some save the event model.
I'm not exactly sure what you're asking here, but if you're asking what should be stored in your event store, then I think a simple solution is the aggregate id, the aggregate type, the event with data (serialized) and the event type. Off the top of my head, that's probably the bare bones of what you'd need: based on the aggregate id you're working with, get all the events for that aggregate (in order raised) and then replay them to rebuild the aggregate. I don't think you need to save the aggregate unless there's some compelling reason to (which is always possible).
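In other words, a bare-bones event store row might look something like this (an illustrative shape, not any particular product's schema):

public class EventStoreRecord
{
    public Guid AggregateId { get; set; }      // which aggregate the event belongs to
    public string AggregateType { get; set; }  // e.g. "AccountAggregate"
    public string EventType { get; set; }      // e.g. "UserIsRegistered"
    public string EventData { get; set; }      // the serialized event (JSON, XML, ...)
    public int Version { get; set; }           // ordering within the aggregate's event stream
    public DateTime TimestampUtc { get; set; }
}

// Rebuilding an aggregate is then: load all records for the id, ordered by Version,
// deserialize each one and replay it onto a fresh aggregate instance.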
As for your request for a practical example and the steps you laid out, that's probably a question in and of itself, but my thoughts on that are:
Check if the user name is already in use
Depending on your application, you may want to do this from the read side in your controller (or whichever layer is raising commands) before you issue a command. Validate at that point, but you'd probably want to validate again before persisting it. You could do that in your event handler where it would probably catch an exception because you're violating a unique index in your database.
Add user to DB
Again, my thought is keep it simple and handle it in your event handler, since your domain is raising a UserIsRegistered event.
Send confirmation email
Your domain could raise the UserIsRegistered event and a second event handler (EmailHandler) would also subscribe to that event and send out the email.
ConfirmationMailSent event could be raised by the event handler, added to the event queue and handled accordingly. I guess I'm not sure what you want to happen here.
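A sketch of that second handler (the IHandleEvent and IMailSender abstractions are placeholders, and the event names mirror the ones used in the question):

// Subscribes to the same UserIsRegistered event as the persistence handler
// and is only responsible for sending the confirmation mail.
public class ConfirmationEmailHandler : IHandleEvent<UserIsRegistered>
{
    private readonly IMailSender _mailSender;

    public ConfirmationEmailHandler(IMailSender mailSender)
    {
        _mailSender = mailSender;
    }

    public void Handle(UserIsRegistered evt)
    {
        _mailSender.SendConfirmationMail(evt.Id);

        // Optionally raise a follow-up event if anything needs to react to the mail being sent.
        EventBus.Raise<ConfirmationMailSent>(x => { x.UserId = evt.Id; });
    }
}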
But, hopefully this helps a bit.

Contextual serialization from WebApi endpoint based on permissions

I am using the Asp.Net Web Api. I would like to be able to filter out certain fields on the response objects based on the connected clients access rights.
Example:
class Foo
{
    [AccessFilter("Uberlord")]
    string Wibble { get; set; }

    string Wobble { get; set; }
}
When returning data, the field Wibble should only be returned if the current user's context can satisfy the value of "Uberlord".
There are three avenues that I am exploring but I have not got a working solution:
A custom WebApi MediaTypeFormatter.
A custom json.net IContractResolver.
Some sort of AOP wrapper for controllers that manipulates the response object
My issue with these are:
The custom formatter does not feel like the right place to do it but might be the only option.
The custom JSON serializer would not have access to the current context, so I would have to work that out.
With the first two options you would require specific implementations for each response format: JSON, XML, some custom format, etc. This would mean that if another response type is supported, a custom formatter / serializer is required to prevent sensitive data leaking.
The AOP controller wrapper would require a lot of reflection.
An additional bonus would be to strip out values from the fields on an inbound request object using the same mechanism.
Have I missed an obvious hook? Has this been solved another way?
It was actually a lot simpler than I first thought. What I did not realise is that a DelegatingHandler can be used to manipulate the response as well as the request in the Web API pipeline.
Lifecycle of an ASP.NET Web API Message
Delegating Handler
Delegating handlers are an extensibility point in the message pipeline allowing you to massage the Request before passing it on to the rest of the pipeline. The response message on its way back has to pass through the Delegating Handler as well, so any response can also be monitored/filtered/updated at this extensibility point.
Delegating handlers, if required, can also bypass the rest of the pipeline and send back an HTTP response themselves.
Example
Here is an example implementation of a DelegatingHandler that can either manipulate the response object or replace it altogether.
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Threading;
using System.Threading.Tasks;

public class ResponseDataFilterHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return base.SendAsync(request, cancellationToken)
            .ContinueWith(task =>
            {
                var response = task.Result;

                // Manipulate content here
                var content = response.Content as ObjectContent;
                if (content != null && content.Value != null)
                {
                    ((SomeObject)content.Value).SomeProperty = null;
                }

                // Or replace the content
                response.Content = new ObjectContent(typeof(object), new object(), new JsonMediaTypeFormatter());

                return response;
            });
    }
}
Microsoft article on how to implement a delegating handler and add it to the pipeline: HTTP Message Handlers in ASP.NET Web API
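For completeness, the handler is registered with the HttpConfiguration at startup, typically in WebApiConfig.Register:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Every request and response now flows through the filtering handler.
        config.MessageHandlers.Add(new ResponseDataFilterHandler());

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}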
I have a similar question in the works over here: ASP.NET WebAPI Conditional Serialization based on User Role
A proposed solution that I came up with is to have my ApiController inherit from a BaseApiController which overrides the Initialize function to set the appropriate formatter based on the user's role. I haven't decided if I will go this way yet, but perhaps it will work for you.
protected override void Initialize(System.Web.Http.Controllers.HttpControllerContext controllerContext)
{
    base.Initialize(controllerContext);

    // If the user is in a sensitive-data access role
    controllerContext.Configuration.Formatters.Add(/*My Formatter*/);

    // Otherwise use the default ones added in global app_start that defaults to remove sensitive data
}

Proper way of using MVVM Light Messenger

What is the proper way to use the Messenger class?
I know it can be used for ViewModel/View communication, but is it a good approach to use it for a technical/business service layer?
For example, a logging/navigation service registers for some messages in its constructor and is aware when these messages occur in the app. The sender (ViewModel or service) does not reference the service interface, but only the messenger, to send messages. Here is a sample service:
using System;
using System.Windows;
using System.Windows.Navigation;
using Microsoft.Phone.Controls;
using App.Service.Interfaces;
using GalaSoft.MvvmLight.Messaging;

namespace App.Service
{
    public class NavigationService : INavigationService
    {
        private PhoneApplicationFrame _mainFrame;

        public event NavigatingCancelEventHandler Navigating;

        public NavigationService()
        {
            Messenger.Default.Register<NotificationMessage<Uri>>(this, m => { this.NavigateTo(m.Content); });
        }

        public void NavigateTo(Uri pageUri)
        {
            if (EnsureMainFrame())
            {
                _mainFrame.Navigate(pageUri);
            }
        }

        public void GoBack()
        {
            if (EnsureMainFrame()
                && _mainFrame.CanGoBack)
            {
                _mainFrame.GoBack();
            }
        }

        private bool EnsureMainFrame()
        {
            if (_mainFrame != null)
            {
                return true;
            }

            _mainFrame = Application.Current.RootVisual as PhoneApplicationFrame;
            if (_mainFrame != null)
            {
                // Could be null if the app runs inside a design tool
                _mainFrame.Navigating += (s, e) =>
                {
                    if (Navigating != null)
                    {
                        Navigating(s, e);
                    }
                };
                return true;
            }
            return false;
        }
    }
}
For me, the main use of a messenger is that it allows for communication between viewmodels. Let's say you have a viewmodel that provides the business logic for a search function, and three viewmodels on your page/window that want to process the search results to show output; the messenger would be the ideal way to do this in a loosely-bound way.
The viewmodel that gets the search data would simply send a "search" message that would be consumed by anything currently registered to consume that message.
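A minimal sketch of that search scenario, with an illustrative SearchMessage payload type:

using System.Collections.Generic;
using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Messaging;

// Message payload carrying the search results.
public class SearchMessage
{
    public IList<string> Results { get; set; }
}

// Producer: the viewmodel that owns the search logic.
public class SearchViewModel : ViewModelBase
{
    public void ExecuteSearch(string term)
    {
        // ... perform the actual search (details omitted) ...
        var results = new List<string> { "first hit", "second hit" };

        Messenger.Default.Send(new SearchMessage { Results = results });
    }
}

// Consumer: any viewmodel interested in the results registers for the message.
public class ResultListViewModel : ViewModelBase
{
    public ResultListViewModel()
    {
        Messenger.Default.Register<SearchMessage>(this, m => DisplayResults(m.Results));
    }

    private void DisplayResults(IList<string> results)
    {
        // update a bound collection here
    }
}

None of the consumers know about SearchViewModel; they only know about the message type, which is what gives you the loose coupling described below.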
The benefits here are:
easy communication between viewmodels without each viewmodel having to know about each other
I can swap out the producer without affecting a consumer.
I can add more message consumers with little effort.
It keeps the viewmodels simple
Edit:
So, what about services?
ViewModels are all about how to present data to the UI. They take your data and shape it into something that can be presented to your View. ViewModels get their data from services.
A service provides the data and/or business logic to the ViewModel. The service's job is to serve business model requests. If a service needs to communicate with or use other services to do its job, these should be injected into the service using dependency injection. Services would not normally communicate with each other using a messenger. The messenger is very much about horizontal communication at the viewmodel level.
One thing I have seen done is to use a messenger as a mediator, where instead of injecting the service directly into a viewmodel, the messenger is injected into the viewmodel. The viewmodel subscribes to an event and receives events containing models. This is great if you're receiving a steady flow of updates, or if you're receiving updates from multiple services that you want to merge into a single stream.
Using a messenger instead of injecting a service when you're doing request/response style calls doesn't make any sense, as you'll have to write more code than you would by just injecting the service directly, and it makes the code harder to read.
Looking at your code above: imagine if you had to write an event for each method on there (Navigate, CanNavigate, GoBack, GoForward, etc.). You'd end up with a lot of messages, and your code would be harder to follow.

ASP.NET MVC2 AsyncController: Does performing multiple async operations in series cause a possible race condition?

The preamble
We're implementing an MVC2 site that needs to consume an external API via HTTPS (we cannot use WCF or even old-style SOAP web services, I'm afraid). We're using AsyncController wherever we need to communicate with the API, and everything is running fine so far.
Some scenarios have come up where we need to make multiple API calls in series, using results from one step to perform the next.
The general pattern (simplified for demonstration purposes) so far is as follows:
public class WhateverController : AsyncController
{
    public void DoStuffAsync(DoStuffModel data)
    {
        AsyncManager.OutstandingOperations.Increment();

        var apiUri = API.getCorrectServiceUri();
        var req = new WebClient();
        req.DownloadStringCompleted += (sender, e) =>
        {
            AsyncManager.Parameters["result"] = e.Result;
            AsyncManager.OutstandingOperations.Decrement();
        };
        req.DownloadStringAsync(apiUri);
    }

    public ActionResult DoStuffCompleted(string result)
    {
        return View(result);
    }
}
We have several Actions that need to perform API calls in parallel working just fine already; we just perform multiple requests, and ensure that we increment AsyncManager.OutstandingOperations correctly.
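For reference, the parallel variant of the pattern above looks roughly like this (two independent calls, with the outstanding-operation counter incremented once per call; the action and parameter names are illustrative):

public void DoParallelStuffAsync()
{
    AsyncManager.OutstandingOperations.Increment(2);

    var first = new WebClient();
    first.DownloadStringCompleted += (sender, e) =>
    {
        AsyncManager.Parameters["first"] = e.Result;
        AsyncManager.OutstandingOperations.Decrement();
    };
    first.DownloadStringAsync(API.getCorrectServiceUri());

    var second = new WebClient();
    second.DownloadStringCompleted += (sender, e) =>
    {
        AsyncManager.Parameters["second"] = e.Result;
        AsyncManager.OutstandingOperations.Decrement();
    };
    second.DownloadStringAsync(API.getCorrectServiceUri());
}

public ActionResult DoParallelStuffCompleted(string first, string second)
{
    return View(new[] { first, second });
}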
The scenario
To perform multiple API service requests in series, we presently call the next step within the event handler for the first request's DownloadStringCompleted event, e.g.,
req.DownloadStringCompleted += (sender, e) =>
{
    AsyncManager.Parameters["step1"] = e.Result;
    OtherActionAsync(e.Result);
    AsyncManager.OutstandingOperations.Decrement();
};
where OtherActionAsync is another action defined in this same controller following the same pattern as defined above.
The question
Can calling other async actions from within the event handler cause a possible race when accessing values within AsyncManager?
I tried looking around MSDN but all of the commentary about AsyncManager.Sync() was regarding the BeginMethod/EndMethod pattern with IAsyncCallback. In that scenario, the documentation warns about potential race conditions.
We don't need to actually call another action within the controller, if that is off-putting to you. The code to build another WebClient and call .DownloadStringAsync() on that could just as easily be placed within the event handler of the first request. I have just shown it like that here to make it slightly easier to read.
Hopefully that makes sense! If not, please leave a comment and I'll attempt to clarify anything you like.
Thanks!
It turns out the answer is "No".
(For future reference, in case anyone comes across this question via a search.)