The following code snippet is provided here for creating an IReliableDictionary inside a StatefulService subclass:
var myDictionary = await this.StateManager.
GetOrAddAsync<IReliableDictionary<string, long>>("myDictionary");
My question is how to write similar code, but for an Actor. The only relevant declaration inside IActorStateManager is the following one, which takes a T value as its second parameter:
Task<T> GetOrAddStateAsync<T>(string stateName, T value, CancellationToken cancellationToken = default(CancellationToken));
The issue is that I can't find any IReliableDictionary implementation available. What should the correct snippet look like?
Because of the way an actor works (turn-based access model, state replication/persistence, etc.), it doesn't really make sense to use an IReliableDictionary in actor state - you can use a normal Dictionary and you will get all the benefits you'd get from a stateful service using a reliable collection.
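For example, a minimal sketch of what that could look like inside an actor method (the state name, the key, and the cancellationToken variable are illustrative; GetOrAddStateAsync and SetStateAsync on IActorStateManager are used to read and persist the value):

// Load (or create) the dictionary as a single piece of actor state.
var myDictionary = await this.StateManager.GetOrAddStateAsync(
    "myDictionary",
    new Dictionary<string, long>(),
    cancellationToken);

// Use it like any in-memory dictionary within the current actor turn.
myDictionary["someKey"] = 42;

// Save the whole dictionary back so the change is persisted and replicated.
await this.StateManager.SetStateAsync("myDictionary", myDictionary, cancellationToken);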
Related
I have a StatefulService with a method. The first argument of the method accepts an interface type that corresponds to one of my Actors. The Actor calls the service method using ServiceProxy, passing this in as the first argument. This compiles fine. The signatures match.
When running however, I get an error about an unexpected type of IMyActorType not being known to the DataContractSerializer. I know what this message means. Does ServiceProxy not handle ActorReferences? I know ActorProxy works. I can pass one Actor to another Actor using ActorProxy.
Or is this maybe some problem in my configuration of the StatefulService? Something with my ServiceReplicaListener setup?
I have worked around this issue for now by changing the method signatures of my StatefulService methods to ActorReference. That serializes fine, and I can unpack it on the other side. I would much rather have the proper typing, however.
ServiceProxy does not handle actor references the way ActorProxy does. The services stack is independent of the actors stack and has no knowledge of actor references. Instead of passing the actor interface, you can get the actor reference (https://msdn.microsoft.com/en-us/library/azure/microsoft.servicefabric.actors.actorreference.get.aspx) and pass that, then bind (https://msdn.microsoft.com/en-us/library/azure/microsoft.servicefabric.actors.actorreference.bind.aspx) the actor reference to the actor interface type on the receiving side. You can cast the output of the Bind method to the actor interface.
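In code, the round trip might look roughly like this (myServiceProxy, DoWorkAsync, and SomeActorMethodAsync are illustrative names; IMyActorType is the actor interface from the question):

// Sender (inside the actor): pass an ActorReference instead of the actor interface.
ActorReference actorReference = ActorReference.Get(this);
await myServiceProxy.DoWorkAsync(actorReference);

// Receiver (inside the stateful service method): bind the reference back to the
// actor interface and call it as usual.
IMyActorType actor = (IMyActorType)actorReference.Bind(typeof(IMyActorType));
await actor.SomeActorMethodAsync();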
You need to pass a simple DataContract-decorated POCO and not an interface. DataContractSerializer won't work on an interface. The same rules that would apply for WCF also apply to Service Fabric.
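For illustration, such a POCO might look like this (the type and member names are made up; note that ActorReference itself serializes fine, as the workaround in the question shows):

// Requires System.Runtime.Serialization; passed across ServiceProxy instead of an actor interface.
[DataContract]
public class DoWorkRequest
{
    [DataMember]
    public ActorReference Caller { get; set; }

    [DataMember]
    public string Payload { get; set; }
}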
I have the following class, which uses constructor injection:
public class Service : IService
{
public Service(IRepository repository, IProvider provider) { ... }
}
For most methods in this class, I simply create Moq mocks for IRepository and IProvider and construct the Service. However, there is one method in the class that calls several other methods in the same class. For testing this method, instead of testing all those methods together, I want to test that the method calls those methods correctly and processes their return values correctly.
The best way to do this is to mock Service. I've mocked concrete classes with Moq before without issue. I've even mocked concrete classes that require constructor arguments with Moq without issue. However, this is the first time I've needed to pass mocked arguments into the constructor for a mocked object. Naturally, I tried to do it this way:
var repository = new Mock<IRepository>();
var provider = new Mock<IProvider>();
var service = new Mock<Service>(repository.Object, provider.Object);
However, that does not work. Instead, I get the following error:
Castle.DynamicProxy.InvalidProxyConstructorArgumentsException : Can not instantiate proxy of class: My.Namespace.Service.
Could not find a constructor that would match given arguments:
Castle.Proxies.IRepository
Castle.Proxies.IProvider
This works fine if Service's constructor takes simple arguments like ints and strings, but not if it takes interfaces that I'm mocking. How do you do this?
Why are you mocking the service you are testing? If you wish to test the implementation of the Service class (whether that involves calls to mocked objects or not), all you need are mocks for the two interfaces, not a mock of the class under test.
Instead of:
var repository = new Mock<IRepository>();
var provider = new Mock<IProvider>();
var service = new Mock<Service>(repository.Object, provider.Object);
Shouldn't it be this instead?
var repository = new Mock<IRepository>();
var provider = new Mock<IProvider>();
var service = new Service(repository.Object, provider.Object);
I realize that it is possible to mock concrete objects in some frameworks, but what is your intended purpose? The idea behind mocking something is to remove the actual implementation so that it does not influence your test. But in your question, you have stated that you wish to know that certain classes are called on properly, and then you wish to validate the results of those actions. That is undoubtedly testing the implementation, and for that reason, I am having a hard time seeing the goals of mocking the concrete object.
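A minimal sketch of that approach, with made-up member names (GetById, Process) and a made-up Entity type purely for illustration:

// Arrange: mock only the dependencies, construct the real Service.
var repository = new Mock<IRepository>();
var provider = new Mock<IProvider>();
repository.Setup(r => r.GetById(42)).Returns(new Entity());

var service = new Service(repository.Object, provider.Object);

// Act: call the method under test.
service.Process(42);

// Assert: the implementation is verified through its collaborators,
// not by mocking Service itself.
repository.Verify(r => r.GetById(42), Times.Once());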
I had a very similar problem when my equivalent of Service had an internal constructor, so it was not visible to Moq.
I added
[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]
to my AssemblyInfo.cs file for the implementing project. Not sure if it is relevant, but I wanted to add a suggestion on the off chance that it helps you or someone else.
It must be an issue with an old version; everything is OK with the latest version. Nick, please check!
P.S.: I started the bounty by mistake (I had the wrong signature in my constructor).
At Google I/O 2011, David Chandler mentioned that you can chain different RequestContexts by using the append() method, but in practice I don't know how to chain them when they have different receivers, using to() and then fire()?
Please help.
There are two kinds of receivers: the ones bound to each method invocation (that you pass to the Request's to() method), and the context-level one (that you pass to the RequestContext's fire() method). The Request's fire(Receiver) method is a short-hand for to(receiver).fire(), i.e. it binds the Receiver to the method.
The method-level receivers depend only on the method; their generic parameterization depends on the method's return type (the generic parameterization of the Request or InstanceRequest), so whether you append() several RequestContexts together changes absolutely nothing.
The context-level receiver is always parameterized with Void. When you append() contexts together, they actually form a single context with several interfaces, so you only call fire() once, on any one of the appended contexts.
Now let's go back to basics: without append(), you can only batch together calls to methods declared on a single context interface. If you want to use two distinct context interfaces, you have to call fire() twice, i.e. make two HTTP requests. append() lets you batch together calls to methods declared on any context interface: simply append one context to another and the calls on both contexts will be batched into the same HTTP request, triggered by a single fire() on either of the appended contexts.
Now for the technical details: internally, a context is nothing more than a thin wrapper around a state object. When you edit() or create() a proxy, you add it to that internal state, and when you call a service method, the method name (actually, its obfuscated token) and the arguments are captured and pushed onto the state as well. When you append() a context, you're only making it share its internal state with that of the context you append it to. That way, when you call a service method on the appended context, it's pushed onto the exact same state as the other context's, and when you fire() either of them, the state is serialized into a single HTTP request.
Note that, to be appended, a context's own internal state has to be empty; otherwise an exception is raised, as that state would be thrown away and replaced by the state of the other context.
In brief, and in practice:
FirstContext first = rf.first();
SomeProxy proxy = first.create(SomeProxy.class);
...
SecondContext second = first.append(rf.second());
OtherProxy other = second.create(OtherProxy.class);
other.setSome(proxy);
...
second.saveAndReturnSelf(other).to(new Receiver<OtherProxy>() {
...
});
...
first.fire();
Note that the line that creates and appends the second context could equally be written:
SecondContext second = rf.second();
first.append(second);
The append method returns its argument as a convenience, but it's really the same value you passed as the argument. This is only to allow writing the one-liner above, instead of being forced to use the two-liner.
I am building an ASP.NET 4.0 MVC 2 app with a generic repository based on this blog post.
I'm not sure how to deal with the lifetime of ObjectContext -- here is a typical method from my repository class:
public T GetSingle<T>(Func<T, bool> predicate) where T : class
{
using (MyDbEntities dbEntities = new MyDbEntities())
{
return dbEntities.CreateObjectSet<T>().Single(predicate);
}
}
MyDbEntities is the ObjectContext generated by Entity Framework 4.
Is it OK to call .CreateObjectSet() and create/dispose MyDbEntities on every HTTP request? If not, how can I preserve this object?
If another method returns an IEnumerable<MyObject> using similar code, will this cause undefined behavior if I try to perform CRUD operations outside the scope of that method?
Yes, it is OK to create a new object context (and in turn call CreateObjectSet) on each request. In fact, it's preferred. And like any object that implements IDisposable, you should be a good citizen and dispose it (which your code above is doing). Some people use an IoC container to scope the lifetime of their object context to the HTTP request, but either way it is short-lived.
For the second part of your question, I think you're asking what happens if another method performs a CRUD operation with a different instance of the data context (let me know if I'm misinterpreting). If that's the case, you'll need to attach the entity to the new data context that will perform the actual database update. This is a fine thing to do. Using the Unit of Work pattern would also be acceptable.
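A rough sketch of the attach-to-a-new-context approach, written in the style of the repository method above (the Update method name is illustrative, and it assumes the detached entity has a valid key):

public void Update<T>(T entity) where T : class
{
    using (MyDbEntities dbEntities = new MyDbEntities())
    {
        // Re-attach the entity that was loaded by a previous, already-disposed context.
        dbEntities.CreateObjectSet<T>().Attach(entity);

        // Mark it as modified so SaveChanges issues an UPDATE.
        dbEntities.ObjectStateManager.ChangeObjectState(entity, EntityState.Modified);

        dbEntities.SaveChanges();
    }
}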
I have a question, and I'm going to tag this subjective since that's what I think it evolves into, more of a discussion. I'm hoping for some good ideas or some thought-provokers. I apologize for the long-winded question but you need to know the context.
The question is basically:
How do you deal with concrete types in relation to IoC containers? Specifically, who is responsible for disposing them, if they require disposal, and how does that knowledge get propagated out to the calling code?
Do you require them to be IDisposable? If not, is that code future-proof, or is the rule that you cannot use disposable objects? If you enforce IDisposable-requirements on interfaces and concrete types to be future-proof, whose responsibility is objects injected as part of constructor calls?
Edit: I accepted the answer by #Chris Ballard since it's the closest one to the approach we ended up with.
Basically, we always return a type that looks like this:
public interface IService<T> : IDisposable
where T: class
{
T Instance { get; }
Boolean Success { get; }
String FailureMessage { get; } // in case Success=false
}
We then return an object implementing this interface back from both .Resolve and .TryResolve, so that what we get in the calling code is always the same type.
Now, the object implementing this interface, IService<T>, is IDisposable and should always be disposed of. It's not up to the programmer that resolves a service to decide whether the IService<T> object should be disposed or not.
However, and this is the crucial part, whether the service instance should be disposed or not, that knowledge is baked into the object implementing IService<T>, so if it's a factory-scoped service (ie. each call to Resolve ends up with a new service instance), then the service instance will be disposed when the IService<T> object is disposed.
This also made it possible to support other special scopes, like pooling. We can now say that we want minimum 2 service instances, maximum 15, and typically 5, which means that each call to .Resolve will either retrieve a service instance from a pool of available objects, or construct a new one. And then, when the IService<T> object that holds the pooled service is disposed of, the service instance is released back into its pool.
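To make that concrete, here is a minimal, illustrative sketch of what a factory-scoped wrapper could look like (this is not our actual container code, just the shape of the idea):

internal sealed class FactoryScopedService<T> : IService<T> where T : class
{
    public FactoryScopedService(T instance)
    {
        Instance = instance;
    }

    public T Instance { get; private set; }
    public Boolean Success { get { return Instance != null; } }
    public String FailureMessage { get; set; }

    public void Dispose()
    {
        // Factory scope: this wrapper owns its instance, so it disposes it if it can.
        // A singleton- or pool-scoped wrapper would do something different here
        // (nothing, or return the instance to its pool).
        var disposable = Instance as IDisposable;
        if (disposable != null)
            disposable.Dispose();
        Instance = null;
    }
}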
Sure, this made all code look like this:
using (var service = ServiceContainer.Global.Resolve<ISomeService>())
{
service.Instance.DoSomething();
}
but it's a clean approach, and it has the same syntax regardless of the type of service or concrete object in use, so we chose that as an acceptable solution.
Original question follows, for posterity
Long-winded question comes here:
We have an IoC container that we use, and recently we discovered what amounts to a problem.
In non-IoC code, when we wanted to use, say, a file, we used a class like this:
using (Stream stream = new FileStream(...))
{
...
}
There was no question as to whether this class was something that held a limited resource or not, since we knew that files had to be closed, and the class itself implemented IDisposable. The rule is simply that every class we construct an object of, that implements IDisposable, has to be disposed of. No questions asked. It's not up to the user of this class to decide if calling Dispose is optional or not.
Ok, so on to the first step towards the IoC container. Let's assume we don't want the code to talk directly to the file, but instead go through one layer of indirection. Let's call this class a BinaryDataProvider for this example. Internally, the class is using a stream, which is still a disposable object, so the above code would be changed to:
using (BinaryDataProvider provider = new BinaryDataProvider(...))
{
...
}
This doesn't change much. The knowledge that the class implements IDisposable is still here, no questions asked, we need to call Dispose.
But let's assume that we have classes that provide data and that, right now, don't use any such limited resources.
The above code could then be written as:
BinaryDataProvider provider = new BinaryDataProvider();
...
OK, so far so good, but here comes the meat of the question. Let's assume we want to use an IoC container to inject this provider instead of depending on a specific concrete type.
The code would then be:
IBinaryDataProvider provider =
ServiceContainer.Global.Resolve<IBinaryDataProvider>();
...
Note that I assume there is an independent interface available that we can access the object through.
With the above change, what if we later on want to use an object that really should be disposed of? None of the existing code that resolves that interface is written to dispose of the object, so what now?
The way we see it, we have to pick one solution:
Implement a runtime check: if a concrete type being registered implements IDisposable, require that the interface it is exposed through also implements IDisposable. This is not a good solution.
Enforce a constraint on the interfaces being used: they must always inherit from IDisposable, in order to be future-proof.
Enforce at runtime that no concrete types can be IDisposable, since this is specifically not handled by the code using the IoC container.
Just leave it up to the programmer to check if the object implements IDisposable and "do the right thing"?
Are there others?
Also, what about injecting objects into constructors? Our container, like some of the other containers we've looked into, is capable of injecting a fresh object into a constructor parameter of a concrete type. For instance, if our BinaryDataProvider needs an object that implements the ILogging interface, and we enforce IDisposable-"ability" on these objects, whose responsibility is it to dispose of the logging object?
What do you think? I want opinions, good and bad.
One option might be to go with a factory pattern, so that the objects created directly by the IoC container never need to be disposed themselves, e.g.:
IBinaryDataProviderFactory factory =
ServiceContainer.Global.Resolve<IBinaryDataProviderFactory>();
using(IBinaryDataProvider provider = factory.CreateProvider())
{
...
}
Downside is added complexity, but it does mean that the container never creates anything which the developer is supposed to dispose of - it is always explicit code which does this.
If you really want to make it obvious, the factory method could be named something like CreateDisposableProvider().
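For completeness, the factory the snippet above assumes might be as simple as this (names mirror the answer; the concrete BinaryDataProvider type is illustrative, and IBinaryDataProvider is assumed to extend IDisposable so the using block compiles):

public interface IBinaryDataProviderFactory
{
    IBinaryDataProvider CreateProvider();
}

public class BinaryDataProviderFactory : IBinaryDataProviderFactory
{
    // The factory itself is cheap and stateless, so the container can hold on to it;
    // the disposable provider is created - and disposed - by the calling code.
    public IBinaryDataProvider CreateProvider()
    {
        return new BinaryDataProvider();
    }
}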
(Disclaimer: I'm answering this based on Java. Although I program in C#, I haven't proxied anything in C#, but I know it's possible. Sorry about the Java terminology.)
You could let the IoC framework inspect the object being constructed to see if it supports IDisposable. If not, you could use a dynamic proxy to wrap the actual object that the IoC framework provides to the client code. This dynamic proxy could implement IDisposable, so that you'd always deliver an IDisposable to the client. As long as you're working with interfaces, that should be fairly simple.
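A rough C# sketch of that idea using Castle DynamicProxy (the DisposableWrapper helper and its method name are made up; the proxy exposes IDisposable and forwards Dispose only when the wrapped object actually is disposable):

using System;
using Castle.DynamicProxy;

public static class DisposableWrapper
{
    private static readonly ProxyGenerator Generator = new ProxyGenerator();

    public static TService Wrap<TService>(TService target) where TService : class
    {
        // Proxy the service interface and additionally expose IDisposable on the proxy.
        return (TService)Generator.CreateInterfaceProxyWithTarget(
            typeof(TService),
            new[] { typeof(IDisposable) },
            target,
            new DisposeInterceptor(target));
    }

    private class DisposeInterceptor : IInterceptor
    {
        private readonly object _target;
        public DisposeInterceptor(object target) { _target = target; }

        public void Intercept(IInvocation invocation)
        {
            if (invocation.Method.DeclaringType == typeof(IDisposable))
            {
                // Forward Dispose only if the real object is actually disposable.
                var disposable = _target as IDisposable;
                if (disposable != null)
                    disposable.Dispose();
                return;
            }
            invocation.Proceed();
        }
    }
}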
Then you'd just have the problem of communicating to the developer when the object is an IDisposable. I'm not really sure how this'd be done in a nice manner.
You actually came up with a very dirty solution: your IService contract violates the SRP, which is a big no-no.
What I recommend is to distinguish so-called "singleton" services from so-called "prototype" services. Lifetime of "singleton" ones is managed by the container, which may query at runtime whether a particular instance implements IDisposable and invoke Dispose() on shutdown if so.
Managing prototypes, on the other hand, is totally the responsibility of the calling code.