ASP.NET MVC2 AsyncController: Does performing multiple async operations in series cause a possible race condition? - asp.net-mvc-2

The preamble
We're implementing an MVC2 site that needs to consume an external API via HTTPS (we cannot use WCF or even old-style SOAP web services, I'm afraid). We're using AsyncController wherever we need to communicate with the API, and everything is running fine so far.
Some scenarios have come up where we need to make multiple API calls in series, using results from one step to perform the next.
The general pattern (simplified for demonstration purposes) so far is as follows:
public class WhateverController : AsyncController
{
    public void DoStuffAsync(DoStuffModel data)
    {
        AsyncManager.OutstandingOperations.Increment();
        var apiUri = API.getCorrectServiceUri();
        var req = new WebClient();
        req.DownloadStringCompleted += (sender, e) =>
        {
            AsyncManager.Parameters["result"] = e.Result;
            AsyncManager.OutstandingOperations.Decrement();
        };
        req.DownloadStringAsync(apiUri);
    }

    public ActionResult DoStuffCompleted(string result)
    {
        return View(result);
    }
}
We have several Actions that need to perform API calls in parallel working just fine already; we just perform multiple requests, and ensure that we increment AsyncManager.OutstandingOperations correctly.
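A simplified sketch of that parallel pattern (the action names are illustrative):
public void ParallelStuffAsync()
{
    // One increment per outstanding request.
    AsyncManager.OutstandingOperations.Increment(2);

    var first = new WebClient();
    first.DownloadStringCompleted += (sender, e) =>
    {
        AsyncManager.Parameters["first"] = e.Result;
        AsyncManager.OutstandingOperations.Decrement();
    };
    first.DownloadStringAsync(API.getCorrectServiceUri());

    var second = new WebClient();
    second.DownloadStringCompleted += (sender, e) =>
    {
        AsyncManager.Parameters["second"] = e.Result;
        AsyncManager.OutstandingOperations.Decrement();
    };
    second.DownloadStringAsync(API.getCorrectServiceUri());
}

public ActionResult ParallelStuffCompleted(string first, string second)
{
    return View();
}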
The scenario
To perform multiple API service requests in series, we presently call the next step from within the event handler for the first request's DownloadStringCompleted event, e.g.:
req.DownloadStringCompleted += (sender, e) =>
{
    AsyncManager.Parameters["step1"] = e.Result;
    OtherActionAsync(e.Result);
    AsyncManager.OutstandingOperations.Decrement();
};
where OtherActionAsync is another action defined in the same controller, following the pattern shown above.
The question
Can calling other async actions from within the event handler cause a possible race when accessing values within AsyncManager?
I tried looking around MSDN, but all of the commentary about AsyncManager.Sync() was regarding the BeginMethod/EndMethod pattern with IAsyncResult/AsyncCallback. In that scenario, the documentation warns about potential race conditions.
We don't need to actually call another action within the controller, if that is off-putting to you. The code to build another WebClient and call .DownloadStringAsync() on that could just as easily be placed within the event handler of the first request. I have just shown it like that here to make it slightly easier to read.
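For illustration, the inlined equivalent would look roughly like this (note that the Increment for the second request happens before the outer Decrement, mirroring the ordering above, so the outstanding count never reaches zero mid-chain):
req.DownloadStringCompleted += (sender, e) =>
{
    AsyncManager.Parameters["step1"] = e.Result;

    // Start step 2 before decrementing for step 1.
    AsyncManager.OutstandingOperations.Increment();
    var req2 = new WebClient();
    req2.DownloadStringCompleted += (sender2, e2) =>
    {
        AsyncManager.Parameters["step2"] = e2.Result;
        AsyncManager.OutstandingOperations.Decrement();
    };
    req2.DownloadStringAsync(API.getCorrectServiceUri());

    AsyncManager.OutstandingOperations.Decrement();
};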
Hopefully that makes sense! If not, please leave a comment and I'll attempt to clarify anything you like.
Thanks!

It turns out the answer is "No".
(For future reference, in case anyone comes across this question via a search.)


{guzzle-services} How to use middleware with GuzzleClient as opposed to directly with raw GuzzleHttp\Client?

My middleware need is to:
add an extra query param to requests made by a REST API client derived from GuzzleHttp\Command\Guzzle\GuzzleClient
I cannot do this directly when invoking APIs through the client because GuzzleClient uses an API specification and it only passes on "legal" query parameters. Therefore I must install a middleware to intercept HTTP requests after the API client prepares them.
The track I am currently on:
$apiClient->getHandlerStack()->push($myMiddleware);
The problem:
I cannot figure out the RIGHT way to assemble the functional Russian doll that $myMiddleware must be. This is an insane gazilliardth-order function scenario, and the exact right way the function should be written seems to be different from the extensively documented way of doing things when working with GuzzleHttp\Client directly. No matter what I try, I end up having wrong things passed to some layer of the matryoshka, causing an argument type error, or I end up returning something wrong from a layer, causing a type error in Guzzle code.
I made a carefully weighted decision to give up trying to understand. Please just give me a boilerplate solution for GuzzleHttp\Command\Guzzle\GuzzleClient, as opposed to GuzzleHttp\Client.
The HandlerStack that GuzzleHttp\Command\Guzzle\GuzzleClient uses for middleware can either transform/validate a command before it is serialized or handle the result after it comes back. If you want to modify the command after it has been turned into a request, but before it is actually sent, then you use the same middleware mechanism as if you weren't using GuzzleClient: create and attach middleware to the GuzzleHttp\Client instance that is passed as the first argument to GuzzleClient.
use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Command\Guzzle\GuzzleClient;
use GuzzleHttp\Command\Guzzle\Description;
use Psr\Http\Message\RequestInterface;

class MyCustomMiddleware
{
    public function __invoke(callable $handler)
    {
        return function (RequestInterface $request, array $options) use ($handler) {
            // ... do something with $request
            return $handler($request, $options);
        };
    }
}

$handlerStack = HandlerStack::create();
$handlerStack->push(new MyCustomMiddleware);
$config['handler'] = $handlerStack;
$apiClient = new GuzzleClient(new Client($config), new Description(...));
The boilerplate solution for GuzzleClient is the same as for GuzzleHttp\Client because regardless of using Guzzle Services or not, your request-modifying middleware needs to go on GuzzleHttp\Client.
You can also use something like
$handler->push(Middleware::mapRequest(function (RequestInterface $request) { ... }));
to manipulate the request. I'm not 100% certain this is the thing you're looking for, but I assume you can add your extra parameter to the request in there.
private function createAuthStack()
{
    $stack = HandlerStack::create();
    $stack->push(Middleware::mapRequest(function (RequestInterface $request) {
        return $request->withHeader('Authorization', 'Bearer ' . $this->accessToken);
    }));
    return $stack;
}
More Examples here: https://hotexamples.com/examples/guzzlehttp/Middleware/mapRequest/php-middleware-maprequest-method-examples.html

Facebook Messenger Bot Proactive/Push Notifications using Azure

I am building a bot for Facebook Messenger using the Microsoft Bot Framework. I am planning to use CosmosDB for state management and also as my backend data store. (I am not tied to CosmosDB and can use any other store if needed.)
I need to send daily/weekly proactive messages (push notifications) to users based on their time preference. I will capture their time preference when they first interact with the bot.
What is the best way to deliver these notifications?
As I will be storing these preferences in CosmosDB, I am thinking of using a CosmosDB trigger to create an Azure Function and schedule it based on the user's time preference. This Azure Function will make a call to my webhook, which will deliver these messages. If required, I will change the Function's schedule when a user changes his/her preference.
My questions are:
Is this a good approach?
Are there any other alternatives (Azure Notification Hubs)?
I should be able to set specific times for notifications (like at the top of the hour); does it make sense to schedule an Azure Function to run at these fixed hours rather than creating a function per user preference? (I could actually combine these two approaches, too.)
Thank you in advance.
First, I don't think there's any "right" answer to be given here; it's going to depend a lot on your domain's specific needs. Scale is going to play a major factor in the design of this. Will you have 100 users? 10000 users? 1mil users? I'm going to assume you want to design for maximum scale up front, but it could be overkill.
Based on what you've described, I don't think a CosmosDB trigger is necessarily the solution to your problem, because that's only going to fire when the preference data is created/updated. I assume that, from that point forward, your function needs to continuously fire at the time slot they've opted into, correct?
So let's pretend you let people choose from the 24hrs in the day. A naïve approach would be to simply use a scheduled trigger that fires up every hour, queries the CosmosDB for all the documents where the preference is set to that particular hour and then begins sending out notifications from there. The problem is how you scale from there and deal with issues of idempotency in the face of failures.
First off, a timer trigger only ever spins up one instance. If you were to just go query the CosmosDB documents and start processing them one by one in the scope of that single trigger, you'd hit a ceiling relatively quickly on how many notifications you can scale to. Instead what you'd want to do is use that timer trigger to fan out the notifications to as many "worker" function instances as possible. The timer trigger can act as the orchestrator in the sense that it can own the query against the CosmosDB and then turn each document result it finds for that particular notification time window into a message that it places on a queue to be processed by a separate function which will scale out on its own.
There are actually a couple of ways you can accomplish this with Azure Functions; it really depends on how early an adopter of technology you are comfortable with being.
The first is what I would call the "manual" way: use the existing Azure Storage Queue extension, take an IAsyncCollector<YourNotificationWorkerMessage> as a parameter to the timer function that's bound to the worker queue, and pump out the messages through that. Then you write a second companion function which uses a QueueTrigger, bind it to that same queue, and it will take care of processing each message. This second function is where you get the scaling, enabling you to process all of the queued messages as quickly as possible based on whatever scaling parameters you choose to configure. This is the "simplest" approach.
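A rough sketch of that manual fan-out, assuming the Azure Storage Queue extension and C# class-library functions (the queue name, the NotificationJob type, and the query helper are all placeholders):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public class NotificationJob
{
    public string UserId { get; set; }
}

public static class NotificationFunctions
{
    // Fires at the top of every hour and fans out one queue message per user.
    [FunctionName("NotificationTimer")]
    public static async Task FanOut(
        [TimerTrigger("0 0 * * * *")] TimerInfo timer,
        [Queue("notification-jobs")] IAsyncCollector<NotificationJob> jobs)
    {
        int hour = DateTime.UtcNow.Hour;
        foreach (var userId in await GetUserIdsForHourAsync(hour))
        {
            await jobs.AddAsync(new NotificationJob { UserId = userId });
        }
    }

    // Scales out on its own; one invocation per queued message.
    [FunctionName("NotificationWorker")]
    public static Task Deliver([QueueTrigger("notification-jobs")] NotificationJob job)
    {
        // Call the bot's webhook to deliver the proactive message (elided).
        return Task.CompletedTask;
    }

    // Placeholder for the CosmosDB query described above.
    private static Task<IEnumerable<string>> GetUserIdsForHourAsync(int hour)
    {
        return Task.FromResult(Enumerable.Empty<string>());
    }
}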
The second approach would be to adopt the newer Durable Functions extension. With that model, you don't have to directly think about creating a worker queue. You simply kick off a new instance of your orchestrator function from the timer function and the orchestrator fans out the work by invoking N "concurrent" calls to an action for each notification. Now, it happens to distribute those calls using queues under the covers, but that's an implementation detail that you need no longer maintain yourself. Additionally, if the work of delivering the notification requires more involved work and/or retry logic, you might actually consider using a sub-orchestration instead of a simple action. Finally, another added benefit of this approach, is that you can "fan back in" to your main orchestrator function once all the notifications are delivered to do some follow up work... even if that's simply some kind of event logging that the notification cycle has completed for this hour.
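A minimal Durable Functions sketch of the same idea (the function and activity names are illustrative, and the exact context type depends on the extension version):
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class NotificationOrchestration
{
    [FunctionName("NotificationOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Activity that performs the CosmosDB query (hypothetical name).
        var userIds = await context.CallActivityAsync<string[]>(
            "GetUsersForHour", context.CurrentUtcDateTime.Hour);

        // Fan out: N concurrent activity calls, one per notification.
        var sends = userIds.Select(id =>
            context.CallActivityAsync("SendNotification", id));
        await Task.WhenAll(sends);

        // Fan back in: all notifications delivered; follow-up work
        // (e.g. logging that this hour's cycle completed) goes here.
    }
}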
Now, the challenge with either of these approaches is actually dealing with failure in initially fetching the candidates for notification from CosmosDB, paging through the results, and making sure you actually fan all of them out in an idempotent manner. You need to deal with possible hiccups as you page, and you need to deal with the fact that your whole function could be torn down and you might have to restart. Perhaps on the initial run of the 8AM notifications you got through page 273 out of 371 pages and then you got hit with a complete network connectivity failure, or the VM your function was running on suffered a power failure. You could resume, but you'd need to know that you left off on page 273, and that you actually processed the 27th record out of that page, and start from there. Otherwise, you risk sending double notifications to your users. Maybe that's something you can accept, maybe it's not. Maybe you're OK with the 27 notifications on that page being duplicated as long as the first 272 pages aren't. Again, this is something you need to decide for your domain, but if you want to avoid this issue your orchestrator function will need to track its progress to ensure that it doesn't send out dupes. Again, I would say Durable Functions has a leg up here, as it comes with the ability to configure retries. Maintaining the state of a particular run is left up to the author in either approach, though.
I use proactive dialogs extensively with the Bot Framework and Messenger without any issue. During your Facebook approval process you simply need to inform them that you will be sending notifications through Messenger with your bot. Usually, if you use it to inform your users and stay away from promotional content, you should be fine.
I also use an Azure Function to trigger the proactive dialog from a custom controller endpoint.
Below is sample code for the Azure Function:
public static void Run(TimerInfo notificationTrigger, TraceWriter log)
{
    try
    {
        // Serialize the request object
        string timerInfo = JsonConvert.SerializeObject(notificationTrigger);
        // Create a request for the bot service with a security token
        HttpRequestMessage hrm = new HttpRequestMessage()
        {
            Method = HttpMethod.Post,
            RequestUri = new Uri(NotificationEndPointUrl),
            Content = new StringContent(timerInfo, Encoding.UTF8, "application/json")
        };
        hrm.Headers.Add("Authorization", NotificationApiKey);
        log.Info(JsonConvert.SerializeObject(hrm));
        // Call the service
        using (var client = new HttpClient())
        {
            Task task = client.SendAsync(hrm).ContinueWith((taskResponse) =>
            {
                HttpResponseMessage result = taskResponse.Result;
                var jsonString = result.Content.ReadAsStringAsync();
                jsonString.Wait();
                if (result.StatusCode != System.Net.HttpStatusCode.OK)
                {
                    // Throw whatever problem occurred as an exception with details
                    throw new Exception($"AzureFunction - ERROR - HTTP {result.StatusCode}");
                }
            });
            task.Wait();
        }
    }
    catch (Exception ex)
    {
        // TODO: log
    }
}
Below is sample code for starting the proactive dialog:
public static async Task Resume<T, R>(string resumptionCookie) where T : IDialog<R>, new()
{
    // Deserialize the reference to the conversation
    ConversationReference conversationReference = JsonConvert.DeserializeObject<ConversationReference>(resumptionCookie);
    // Generate a message from bot to user
    var message = conversationReference.GetPostToBotMessage();
    var builder = new ContainerBuilder();
    using (var scope = DialogModule.BeginLifetimeScope(Conversation.Container, message))
    {
        // From a cold start the service is not yet authenticated with the bot's
        // Azure services; we thus must trust the endpoint URL.
        if (!MicrosoftAppCredentials.IsTrustedServiceUrl(message.ServiceUrl))
        {
            MicrosoftAppCredentials.TrustServiceUrl(message.ServiceUrl, DateTime.MaxValue);
        }
        var botData = scope.Resolve<IBotData>();
        await botData.LoadAsync(CancellationToken.None);
        // This is our dialog stack
        var task = scope.Resolve<IDialogTask>();
        T dialog = scope.Resolve<T>(); // Resolve the dialog using Autofac
        try
        {
            task.Call(dialog.Void<R, IMessageActivity>(), null);
            await task.PollAsync(CancellationToken.None);
        }
        catch (Exception ex)
        {
            // TODO: log
        }
        finally
        {
            // Flush the dialog stack
            await botData.FlushAsync(CancellationToken.None);
        }
    }
}
Your dialog needs to be registered in Autofac.
Your resumptionCookie needs to be saved in your DB.
You might want to check Facebook's policy regarding proactive messages. There's a 24h limit, but it might not be a blocker in your case:
https://developers.facebook.com/docs/messenger-platform/policy/policy-overview#standard_messaging

Explicit transaction for entire request duration with automatic commit/rollback on errors (EF6, Web API2, NInject)

I'm starting a new Web API application, and I'm unsure how to handle transactions (and subsequent rollbacks in case of exceptions).
My overall goal is to have a single database connection per request, and have the entire thing wrapped in an explicit transaction.
I'll need an explicit transaction since I will be executing stored procedures as well, and need to roll back any results from those if my application should throw any exceptions.
My plan was to re-use an approach I've used in MVC applications in the past, which in rough terms was simply binding my database context to request scope using Ninject and then handling rollback/commit in the OnDeactivation event.
Let's say I have a controller with two methods.
public class MyController : ApiController
{
    private readonly IRepo _repo;

    public MyController(IRepo repo)
    {
        _repo = repo;
    }

    public string SimpleAddElement()
    {
        _repo.Add(new MyModel());
        return "added";
    }

    public string ThisCouldBlowUp()
    {
        // read from context
        var foo = _repo.ReadFromDB();
        // execute stored procedure which changes some content
        var res = _repo.StoredProcOperation();
        // throw an exception due to bug/failsafe condition
        if (res == 42)
            throw new Exception("Argh, an error occured");
        return foo;
    }
}
My repo skeleton
public class Repo : IRepo {
public Repo(IMyDbContext context) {
}
}
From here, my plan was to simply bind the repositories using
kernel.Bind<IRepo>().To<Repo>();
and provide a single database context per request using
kernel.Bind<IMyDbContext>().ToMethod(CreateCtx)
      .InRequestScope()
      .OnDeactivation(FinalizeTransaction);

private IMyDbContext CreateCtx(Ninject.Activation.IContext context)
{
    var ctx = new MyDbContext();
    ctx.Database.BeginTransaction();
    return ctx;
}

private void FinalizeTransaction(IMyDbContext ctx)
{
    if (true /* no errors logged on current HttpRequest.AllErrors */)
        ctx.Commit();
    else
        ctx.Rollback();
}
Now, if I invoke SimpleAddElement from my browser, FinalizeTransaction never gets invoked... so either I'm doing something wrong, or I'm missing something related to the Web API pipeline.
So how should I go about implementing a transactional "single DB session per request"-module?
What is best practice?
If possible, I'd like the solution to support ASP.NET vNext as well.
I suppose one potential solution could be dropping the OnDeactivation handler and implementing an HTTP module which commits in EndRequest and rolls back in Error... but there's just something about that I don't like.
You are missing an abstraction in your code. You execute business logic inside your controller, which is the wrong place. If you extract this logic to the business layer and hide it behind an abstraction, it will be trivial to wrap all business layer operations inside a transaction. Take a look at this article for some examples of this.
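To make that concrete, here is a minimal sketch of the idea, assuming a command/handler-style business layer wrapped by a transaction decorator (the interface and class names are illustrative, not the article's exact code):
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Wraps any handler in an explicit EF6 transaction.
public class TransactionCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;
    private readonly IMyDbContext context;

    public TransactionCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated, IMyDbContext context)
    {
        this.decorated = decorated;
        this.context = context;
    }

    public void Handle(TCommand command)
    {
        using (var transaction = context.Database.BeginTransaction())
        {
            try
            {
                decorated.Handle(command);
                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }
}
The controller then depends only on ICommandHandler<TCommand>; commit and rollback live in exactly one place, and an exception thrown anywhere in the operation rolls back the stored procedure work as well.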

Performing Explicit Route Mapping based upon Web Api v2 Attributes

I'm upgrading a custom solution, in which I can dynamically register and unregister Web API controllers, to use the new attribute routing mechanism. However, it seems the recent update to RTM breaks my solution.
My solution exposes a couple of Web Api controllers for administration purposes. These are registered using the new HttpConfigurationExtensions.MapHttpAttributeRoutes method call.
The solution also allows Web API controllers to be hosted in third-party assemblies and registered dynamically. At this stage, calling HttpConfigurationExtensions.MapHttpAttributeRoutes a second time once the third-party controller is loaded would raise an exception. Therefore, my solution uses reflection to inspect the RoutePrefix and Route attributes and register corresponding routes on the HttpConfiguration object.
Unfortunately, calling the Web Api results in the following error:
"No HTTP resource was found that matches the request URI".
Here is a simple controller that I want to use:
[RoutePrefix("api/ze")]
public sealed class ZeController : ApiController
{
    [HttpGet]
    [Route("one")]
    public string GetOne()
    {
        return "One";
    }

    [HttpGet]
    [Route("two")]
    public string GetTwo()
    {
        return "Two";
    }

    [HttpPost]
    [Route("one")]
    public string SetOne(string value)
    {
        return String.Empty;
    }
}
Here is the first solution I tried:
configuration.Routes.MapHttpRoute("ZeApi", "api/ze/{action}");
Here is the second solution I tried:
var type = typeof(ZeController);
var routeMembers = type.GetMethods().Where(m => m.IsPublic);
foreach (MethodInfo method in routeMembers)
{
    var routeAttribute = method.GetCustomAttributes(false).OfType<RouteAttribute>().FirstOrDefault();
    if (routeAttribute != null)
    {
        string controllerName = type.Name.Substring(0, type.Name.LastIndexOf("Controller"));
        string routeTemplate = string.Join("/", "api/Ze", routeAttribute.Template);
        configuration.Routes.MapHttpRoute(method.Name, routeTemplate,
            new { controller = controllerName, action = method.Name });
    }
}
I also have tried a third solution, whereby I create custom classes that implement IHttpRoute and trying to register them with the configuration to no avail.
Is it possible to use legacy-style route mapping based upon the information contained in the new routing attributes ?
Update
I have installed my controller in a web application in order to troubleshoot the route selection process with the Web API Route Debugger. As the debugger output shows, the correct action seems to be selected, but I still get a 404 error.
Update2
After further analysis, and per Kiran Challa's comment below, it seems that the design of Web API prevents mixing attribute routing and conventional routing, and that what I want to do is not possible using this approach.
I have created a custom attribute, [RouteEx], that serves the same purpose as the Web API [Route] attribute, and now my code works perfectly.
I guess, since this is not possible using conventional attribute routing, none of the answers on this question could legitimately be considered valid, so I'm not nominating an answer just yet.
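The actual [RouteEx] implementation wasn't posted; a minimal reconstruction of the idea might look like this (purely illustrative):
// Hypothetical reconstruction. Because this attribute is invisible to
// Web API's attribute routing, reflection-based MapHttpRoute registration
// no longer conflicts with MapHttpAttributeRoutes.
[AttributeUsage(AttributeTargets.Method)]
public sealed class RouteExAttribute : Attribute
{
    public RouteExAttribute(string template)
    {
        Template = template;
    }

    public string Template { get; private set; }
}
The reflection loop from the second attempt above would then scan for RouteExAttribute instead of RouteAttribute.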
You shouldn't be required to use reflection and inspect the attribute-routing attributes yourself. Attribute routing uses existing Web API features to get the list of controllers to scan through.
Question: before the switch to attribute routing, how were you loading these assemblies containing the controllers?
If you were doing this via the IAssembliesResolver service, then this solution should work even with attribute routing and you should not need to do anything extra.
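For example, a custom resolver might look roughly like this (the plugin path and names are placeholders):
using System.Collections.Generic;
using System.Reflection;
using System.Web.Http.Dispatcher;

public class PluginAssembliesResolver : DefaultAssembliesResolver
{
    public override ICollection<Assembly> GetAssemblies()
    {
        ICollection<Assembly> assemblies = base.GetAssemblies();
        // Load the third-party controller assembly (placeholder path).
        assemblies.Add(Assembly.LoadFrom(@"plugins\ThirdParty.Controllers.dll"));
        return assemblies;
    }
}

// Registered before mapping attribute routes:
// config.Services.Replace(typeof(IAssembliesResolver), new PluginAssembliesResolver());
// config.MapHttpAttributeRoutes();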
Regarding your Update: are you calling MapHttpAttributeRoutes?

Creating a REST service in Sitecore

I'm trying to build a REST service in a Sitecore root. My Application_Start looks like this:
void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = System.Web.Http.RouteParameter.Optional });
}
And my URL looks like this:
http://{mydomain}/api/books
I have the correct controller and all that.
But Sitecore keeps redirecting me to the 404 page. I've added the path to the IgnoreUrlPrefixes node in the web.config, but to no avail. If I had to guess, I'd think that Sitecore's handler is redirecting before my code gets the chance to execute, but I really don't know.
Does anybody have any idea what might be wrong?
Your assessment is correct. You need a processor in the httpRequestBegin pipeline to abort Sitecore's processing. See the SystemWebRoutingResolver in this answer:
Sitecore and ASP.net MVC
It's also described in this article:
http://www.sitecore.net/Community/Technical-Blogs/John-West-Sitecore-Blog/Posts/2010/10/Sitecore-MVC-Crash-Course.aspx
But I'll include the code here as well. :)
public class SystemWebRoutingResolver : Sitecore.Pipelines.HttpRequest.HttpRequestProcessor
{
    public override void Process(Sitecore.Pipelines.HttpRequest.HttpRequestArgs args)
    {
        RouteData routeData = RouteTable.Routes.GetRouteData(new HttpContextWrapper(args.Context));
        if (routeData != null)
        {
            args.AbortPipeline();
        }
    }
}
Then in your httpRequestBegin configuration:
<processor type="My.SystemWebRoutingResolver, My.Classes" />
You might want to have a look at the Sitecore Web API.
It's pretty much the same as what you are building.
Another option, which I've used to good effect, is to use the content tree, the "star" item, and a sublayout/layout combination dedicated to this purpose:
[siteroot]/API/*/*/*/*/*/*/*/*/*
The above path allows you to have anywhere between 1 and 9 segments - if you need more than that, you probably need to rethink your process, IMO. This also retains all of the Sitecore context. Sitecore, when unable to find an item in a folder, attempts to look for the catch-all star item and if present, it renders that item instead of returning a 404.
There are a few ways to go about implementing the RESTful methods and the sublayout (or sublayouts, if you want to segregate them by depth to simplify parsing).
You can choose to follow the general "standard" and use GET, PUT, and POST calls to interact with these items, but then you can't use Sitecore caching without custom backend caching code. Alternately, you can split your API into three different trees:
[siteroot]/API/GET/*/*/*/*/*/*/*/*/*
[siteroot]/API/PUT/*/*/*/*/*/*/*/*/*
[siteroot]/API/POST/*/*/*/*/*/*/*/*/*
This allows caching the GET requests (since GET requests should only retrieve data, not update it). Be sure to use the proper caching scheme; essentially, this should cache based on every permutation of the data, user, etc., if you intend to use this in any of those contexts.
If you are going to create multiple sublayouts, I recommend creating a base class that handles the general methods for GET, PUT, and POST, and then using that class as the base for your sublayouts.
In your sublayouts, you simply get the Request object, get the path (and query string if you're using queries), split it, and perform your switch-case logic just as you would with standard routing. For PUT, read the raw request body (e.g. via Request.BinaryRead). For POST, use the Request.Form object to get all of the form elements and iterate through them to process the information provided. (It may be easiest to put all of your form data into a single JSON object encapsulated as a string, so .NET sees it as one single property; then you only have one element in the post to deserialize, depending on the POST path the user specified.)
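A bare-bones sketch of such a sublayout code-behind, showing only the segment parsing and method dispatch (all names are placeholders):
using System;

public partial class ApiSublayout : System.Web.UI.UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Everything after "/API/" becomes the segment list.
        string path = Request.Path;
        int start = path.IndexOf("/API/", StringComparison.OrdinalIgnoreCase) + 5;
        string[] segments = path.Substring(start).Trim('/').Split('/');

        switch (Request.HttpMethod)
        {
            case "GET":
                HandleGet(segments);
                break;
            case "PUT":
                // Raw body of the PUT request.
                byte[] body = Request.BinaryRead(Request.ContentLength);
                HandlePut(segments, body);
                break;
            case "POST":
                HandlePost(segments, Request.Form);
                break;
        }
    }

    private void HandleGet(string[] segments) { /* ... */ }
    private void HandlePut(string[] segments, byte[] body) { /* ... */ }
    private void HandlePost(string[] segments, System.Collections.Specialized.NameValueCollection form) { /* ... */ }
}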
Complicated? Yes. Works? Yes. Recommended? Well... if you're in a shared environment (multiple sites) and you don't want this processing happening for EVERY site in the pipeline processor, then this solution works. If you have access to using MVC with Sitecore or have no issues altering the pipeline processor, then that is likely more efficient.
One benefit to the content based method is that the context lifecycle is exactly the same as a standard Sitecore page (logins, etc.), so you've got all the same controls as any other item would provide at that point in the lifecycle. The negative to this is that you have to deal with the entire page lifecycle load before it gets to your code... the pipeline processor can skip a lot of Sitecore's process and just get the data you need directly, making it faster.
You need to have a pipeline initializer for routing. It will look like this:
public class Initializer
{
    public void Process(PipelineArgs args)
    {
        RouteCollection route = RouteTable.Routes;
        route.MapHttpRoute("DefaultApi", "api/{controller}/{action}/{id}",
            new { id = RouteParameter.Optional });
    }
}
In your config file you will have:
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <initialize>
        <processor type="_YourNameSpace.Initializer,_YourAssembly" />
      </initialize>
    </pipelines>
  </sitecore>
</configuration>
Happy coding