How should I relate Service Fabric Actors together?

I have a number of Actors, each of which represents a single physical object (an IoT device).
On reviewing our code base, I see that we sometimes pass an existing ActorReference into other Actors and other times we create a new proxy object.
I would have assumed that passing in an existing ActorReference is more performant, but I'm worried about side effects, so creating a new proxy object seems lower risk.
What are the pros and cons of each approach, and what should I consider when deciding which one to use?

From examining the decompiled source, ActorReference.Bind and ActorProxy.Create appear to be nearly equivalent:
public object Bind(Type actorInterfaceType)
{
    return ActorProxy.DefaultProxyFactory.CreateActorProxy(actorInterfaceType, this.ServiceUri, this.ActorId, this.ListenerName);
}

public static TActorInterface Create<TActorInterface>(ActorId actorId, string applicationName = null, string serviceName = null, string listenerName = null) where TActorInterface : IActor
{
    return ActorProxy.DefaultProxyFactory.CreateActorProxy<TActorInterface>(actorId, applicationName, serviceName, listenerName);
}
So there is no difference in either reliability or performance.
ActorReference supports serialization, which makes it more suitable for passing between actors.

Passing an ActorReference is the better and recommended approach, as it allows the receiving side to create the ActorProxy with a custom ActorProxyFactory. If an actor is passed as an interface instead, the deserialization process uses the default actor proxy factory to bind the reference to a proxy and provides that proxy object to the receiving method.
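For illustration, a minimal sketch of passing a reference between actors (IDeviceActor and RegisterPeerAsync are hypothetical names, not part of the SDK):

// Sender side: capture a serializable reference from an existing actor or proxy
ActorReference reference = ActorReference.Get(deviceActorProxy);
await otherActor.RegisterPeerAsync(reference);

// Receiver side: bind the reference back to a typed proxy; a custom
// ActorProxyFactory could be used here instead of the default one
IDeviceActor peer = (IDeviceActor)reference.Bind(typeof(IDeviceActor));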

Related

Should service layer accept a DTO or a custom request object from the controller?

As the title suggests, what is the best practice when designing service layers? I understand that a service layer should always return a DTO so that domain (entity) objects are preserved within the service layer. But what should the controllers pass as input to the service layer?
I put forward three of my own suggestions below:
Method 1:
In this method the domain object (Item) is preserved within the service layer.
class Controller
{
    @Autowired
    private ItemService service;

    public ItemDTO createItem(ItemDTO dto)
    {
        // service layer returns a DTO object and accepts a DTO object
        return service.createItem(dto);
    }
}
Method 2:
This is where the service layer receives a custom request object. I have seen this pattern extensively in the AWS Java SDK and also in the Google Cloud Java API.
class Controller
{
    @Autowired
    private ItemService service;

    public ItemDTO createItem(CreateItemRequest request)
    {
        // service layer returns a DTO object and accepts a custom request object
        return service.createItem(request);
    }
}
Method 3:
The service layer accepts a request object and returns a domain object. I am not a fan of this method, but it's been used extensively at my workplace.
class Controller
{
    @Autowired
    private ItemService service;

    public ItemDTO createItem(CreateItemRequest request)
    {
        // service layer accepts a request object and returns a domain object
        Item item = service.createItem(request);
        return ItemDTO.fromEntity(item);
    }
}
If all 3 of the above methods are incorrect or not the best way to do it, please advise me on the best practice.
Conceptually speaking, you want to be able to reuse the service/application layer across presentation layers and across different access ports (e.g. a console app talking to your app through a web socket). Furthermore, you do not want every single domain change to bubble up into the layers above the application layer.
The controller conceptually belongs to the presentation layer. Therefore, you wouldn't want the application layer to be coupled to a contract defined in the same conceptual layer the controller is defined in. You also wouldn't want the controller to depend upon the domain, or it may have to change when the domain changes.
You want a solution where the application layer method contracts (parameters & return type) are expressed in Java native types or in types defined within the service layer boundary.
If we take the IDDD samples from Vaughn Vernon, we can see that his application service method contracts are defined with Java native types. His application service command methods also do not return any result, given that he used CQRS, but we can see that the query methods do return a DTO defined in the application/service layer package.
Of the 3 methods listed above, which ones are correct/wrong?
Both #1 and #2 are very similar and could be right from a dependency standpoint, as long as ItemDTO and CreateItemRequest are defined in the application layer package. I would favor #2, though, since the input data type is named after the use case rather than simply after the kind of entity it deals with: entity-focused naming fits better with CRUD, and you might find it difficult to find good names for the input data types of other use case methods operating on the same kind of entity. #2 has also been popularized through CQRS (where commands are usually sent to a command bus), but it is not exclusive to CQRS; Vaughn Vernon also uses this approach in the IDDD samples. Please note that what you call a request is usually called a command.
However, #3 would not be ideal given it couples the controller (presentation layer) with the domain.
For example, some methods receive 4 or 5 args. According to Eric Evans in Clean Code, such methods must be avoided.
That's a good guideline to follow, and I'm not saying the samples couldn't be improved, but keep in mind that in DDD the focus is on naming things according to the Ubiquitous Language (UL) and following it as closely as possible. Therefore, forcing new concepts into the design just for the sake of grouping arguments together could potentially be harmful. Ironically, the process of attempting to do so may still offer good insights and allow you to discover overlooked & useful domain concepts that could enrich the UL.
PS: Robert C. Martin wrote Clean Code, not Eric Evans, who is famous for the blue book.
I'm from a C# background but the concept remains the same here.
In a situation like this, where we have to pass parameters/state from the application layer to the service layer and then return a result from the service layer, I tend to follow separation of concerns. The service layer does not need to know about the request parameter of your application layer/controller. Similarly, what you return from the service layer should not be coupled to what you return from your controller. These are different layers, different requirements, separate concerns. We should avoid tight coupling.
For the above example, I would do something like this:
class Controller
{
    @Autowired
    private ItemService service;

    public ItemResponse createItem(CreateItemRequest request)
    {
        var createItemDto = getDto(request);
        var itemDto = service.createItem(createItemDto);
        return getItemResponse(itemDto);
    }
}
This may feel like more work since you now need to write additional code to convert between the different objects. However, it gives you great flexibility and makes the code much easier to maintain. For example, CreateItemDto may have additional/computed fields compared to CreateItemRequest. In such cases, you do not need to expose those fields in your request object. You only expose your data contract to the client and nothing more. Similarly, you only return the relevant fields to the client, as opposed to whatever you return from the service layer.
If you want to avoid manual mapping between DTO and request objects, C# has libraries like AutoMapper. In the Java world, I'm sure there is an equivalent; maybe ModelMapper can help.
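For illustration, a minimal AutoMapper sketch in C# (CreateItemRequest and CreateItemDto are the hypothetical types from the example above):

using AutoMapper;

// Configure a one-way mapping from the transport object to the service-layer DTO.
var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<CreateItemRequest, CreateItemDto>());
var mapper = config.CreateMapper();

// Replaces the hand-written getDto(request) conversion.
CreateItemDto createItemDto = mapper.Map<CreateItemDto>(request);

ModelMapper's Java API is similar: new ModelMapper().map(request, CreateItemDto.class).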

Service Fabric, determine if specific actor exists

We are using Azure Service Fabric and are using actors to model specific devices, using the id of the device as the ActorId. Service Fabric will instantiate a new actor instance when we request an actor for a given id that is not already instantiated, but I cannot seem to find an API that allows me to query whether a specific device id already has an instantiated actor.
I understand that there might be some distributed/timing issues in obtaining the point-in-time truth, but for our specific purpose we do not need a hard realtime answer and can settle for a best guess. We would just like to, in theory, contact the current primary for the specific partition resolved by the ActorId and get back whether or not the device has an instantiated actor.
Ideally it would be a fast/performant call, essentially faster than, say, instantiating the actor and calling a method to determine whether it has been initialized correctly and is not just an "empty" actor.
You can use the ActorServiceProxy to iterate through the information for a specific partition, but that does not seem to be a very performant way of obtaining the information.
Anyone with insights into this?
The only official way to check whether the actor has previously been activated in any service partition is the ActorServiceProxy query, as described here:
IActorService actorServiceProxy = ActorServiceProxy.Create(
    new Uri("fabric:/MyApp/MyService"), partitionKey);

ContinuationToken continuationToken = null;
ActorInformation actor = null;
do
{
    PagedResult<ActorInformation> page =
        await actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken);
    actor = page.Items.FirstOrDefault(x => x.ActorId == idToFind);
    continuationToken = page.ContinuationToken;
}
while (actor == null && continuationToken != null); // stop once found or all pages are exhausted
By their nature, SF Actors are virtual, which means they always "exist" even if they have never been activated, and that makes this check a bit harder.
As you said, querying all actors is not performant, so here are some other workarounds you could try:
Option 1: Store the ActorIds in a Reliable Dictionary elsewhere; every time an actor is activated, you raise an event and insert the ActorId into the dictionary if it is not there yet.
You can use the OnActivateAsync() actor event to notify of its creation, or
you can use a custom actor factory in the ActorService to register actor activation.
You can store the dictionary in another actor or in another StatefulService.
Option 2: Create a property in the actor that is set by the actor itself when it is activated (see the sketch after this list).
OnActivateAsync() checks whether this property has been set before.
If it has not been set yet, you set it and also record in a variable (a non-persisted value) that the actor is new.
Whenever you interact with the actor, you update this to indicate that it is no longer new.
On the next activation, the property will already be set, and nothing should happen.
Option 3: Create a custom IActorStateProvider to do the same as mentioned in option 2, handling it a level underneath the actor instead of in the actor itself. Honestly, I think it is a bit of work; it would only be handy if you had to do the same for many actor types. Options 1 and 2 are much easier.
Option 4: As Peter Bons suggested, store the ActorId outside the ActorService, for example in a database. I would only suggest this option if you have to do this check from outside the cluster.
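A minimal sketch of option 2, assuming a hypothetical DeviceActor (the IDeviceActor interface and the state name are illustrative):

public class DeviceActor : Actor, IDeviceActor
{
    // Non-persisted flag: true only for the activation that created the state.
    private bool isNew;

    public DeviceActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId)
    {
    }

    protected override async Task OnActivateAsync()
    {
        // "ActivatedBefore" is an illustrative state name. TryAddStateAsync
        // returns false when the state already exists, i.e. the actor has
        // been activated before.
        this.isNew = await this.StateManager.TryAddStateAsync("ActivatedBefore", true);
    }
}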
The following snippet can help you if you want to handle these events outside the actor.
private static void Main()
{
    try
    {
        ActorRuntime.RegisterActorAsync<NetCoreActorService>(
            (context, actorType) => new ActorService(context, actorType,
                new Func<ActorService, ActorId, ActorBase>((actorService, actorId) =>
                {
                    RegisterActor(actorId); // the custom method to register the actor if new
                    return (ActorBase)Activator.CreateInstance(actorType.ImplementationType, actorService, actorId);
                })
            )).GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
    catch (Exception e)
    {
        ActorEventSource.Current.ActorHostInitializationFailed(e.ToString());
        throw;
    }
}

private static void RegisterActor(ActorId actorId)
{
    // Here you put the logic to register the actor creation elsewhere
}
Alternatively, you could create a stateful DeviceActorStatusActor which would be notified (called) by DeviceActor as soon as it's created. (Share the ActorId for correlation.)
Depending on your needs you can also register multiple Actors with the same status-tracking actor.
You'll have great performance and near real-time information.
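A hedged sketch of that status-tracking idea; IDeviceActorStatusActor and its members are hypothetical names, not part of the Service Fabric API:

public interface IDeviceActorStatusActor : IActor
{
    Task ReportActivatedAsync(ActorId deviceActorId);
    Task<bool> ExistsAsync(ActorId deviceActorId);
}

// Inside DeviceActor, notify the tracker on activation; ActorId is
// DataContract-serializable, so it can be passed between actors.
protected override Task OnActivateAsync()
{
    var statusActor = ActorProxy.Create<IDeviceActorStatusActor>(new ActorId("device-status"));
    return statusActor.ReportActivatedAsync(this.Id);
}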

Dynamic configs with Structuremap

Here's what I am trying to accomplish with StructureMap.
On each web request, the database connection strings and web service URLs used by our clients vary based on some business logic. Currently, our SQL and web service client implementations receive the configs in their constructors.
I wanted to use profiles, only to discover that it is not possible to use them per request.
In our team, we're having a debate over two solutions:
1- Pass a config factory into the registry that can resolve which configuration to use when the container needs to instantiate something (a sketch of this follows below).
The problem I see is that we might have to use HttpContext.Items, since most of the app objects are not instantiated by StructureMap, and it seems hard to get the current request context from within the factory.
2- Instantiate a container for each different configuration and decide which container to use based on the business logic.
The problems I see are load time, memory consumption, and possibly object lifecycles. Beyond that I can't point to a concrete problem; it just feels wrong to me to have multiple containers.
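For what it's worth, a minimal sketch of solution 1 using the same lambda-registration style StructureMap offers (IConfigFactory and CreateConnectionString are hypothetical names):

ObjectFactory.Configure(config => {
    // Defer the choice of connection string to a factory at resolution time;
    // the factory can consult whatever per-request data it needs.
    config.For<IDbConnection>().Use(ctx =>
        new SqlConnection(ctx.GetInstance<IConfigFactory>().CreateConnectionString()));
});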
1- Do you see other problems?
2- Any better idea?
3- Which one would you choose?
Thank you
EDIT
and it seems hard to get the current request context from within the factory.
I don't mean HttpContext, I mean the request data. For this app, it is a WCF request object.
it seems hard to get the current request context from within the factory.
Not sure why it seems that way. Wouldn't the following do the trick?
ObjectFactory.Configure(config => {
    config.For<HttpContextBase>()
        .Use(() => new HttpContextWrapper(HttpContext.Current));
    config.For<Service>().Use<Service>();
});

var service = ObjectFactory.GetInstance<Service>();

public class ConfigurationFactory
{
    public ConfigurationFactory(System.Web.HttpContextBase context)
    {
    }
}

public class Service
{
    public Service(ConfigurationFactory configuration)
    {
    }
}

Class Design: Demeter vs. Connection Lifetimes

Okay, so here's a problem I'm running into.
I have some classes in my application that have methods that require a database connection. I am torn between two different ways to design the classes, both of which are centered around dependency injection:
Provide a property for the connection that is set by the caller prior to method invocation. This has a few drawbacks.
Every method relying on the connection property has to validate that property: that it isn't null, that it's open, and that it isn't involved in a transaction if that would muck up the operation.
If the connection property is unexpectedly closed, all the methods have to either (1) throw an exception or (2) coerce it open. Depending on the level of robustness you want, either case may be appropriate. (Note that this is different from a connection that is passed to a method, in that the reference to the connection exists for the lifetime of the object, not simply for the lifetime of the method invocation. Consequently, the volatility of the connection just seems higher to me.)
Providing a Connection property seems (to me, anyway) to scream out for a corresponding Transaction property. This creates additional overhead in the documentation, since you'd have to make it fairly obvious when the transaction was being used and when it wasn't.
On the other hand, Microsoft seems to favor the whole set-and-invoke paradigm.
Require the connection to be passed as an argument to the method. This has a few advantages and disadvantages:
The parameter list is naturally larger. This is irksome to me, primarily at the point of call.
While a connection (and a transaction) must still be validated prior to use, the reference to it exists only for the duration of the method call.
The point of call is, however, quite clear. It's very obvious that you must provide the connection, and that the method won't be creating one behind your back automagically.
If a method doesn't require a transaction (say a method that only retrieves data from the database), no transaction is required. There's no lack of clarity due to the method signature.
If a method requires a transaction, it's very clear from the method signature. Again, there's no lack of clarity (a short sketch follows this list).
Because the class does not expose a Connection or a Transaction property, there's no chance of callers trying to drill down through them to their properties and methods, thus enforcing the Law of Demeter.
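To make the signature point concrete, a minimal sketch (the repository and Order type are illustrative):

using System.Collections.Generic;
using System.Data;

public class Order { /* illustrative entity */ }

public class OrderRepository
{
    // Read-only query: no transaction parameter, so callers know none is needed.
    public IList<Order> GetOrders(IDbConnection connection)
    {
        var orders = new List<Order>();
        // ... execute a SELECT against the connection and fill the list ...
        return orders;
    }

    // Mutating operation: the required transaction is explicit in the signature.
    public void SaveOrder(IDbConnection connection, IDbTransaction transaction, Order order)
    {
        // ... execute INSERT/UPDATE commands enlisted in the transaction ...
    }
}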
I know, it's a lot. But on the one hand, there's the Microsoft Way: Provide properties, let the caller set the properties, and then invoke methods. That way, you don't have to create complex constructors or factory methods and the like. Also, avoid methods with lots of arguments.
Then, there's the simple fact that if I expose these two properties on my objects, they'll tend to encourage consumers to use them in nefarious ways. (Not that I'm responsible for that, but still.) But I just don't really want to write crappy code.
If you were in my shoes, what would you do?
Here is a third pattern to consider:
Create a class called ConnectionScope, which provides access to a connection
Any class at any time, can create a ConnectionScope
ConnectionScope has a property called Connection, which always returns a valid connection
Any (and every) ConnectionScope gives access to the same underlying connection object (within some scope, maybe within the same thread, or process)
You then are free to implement that Connection property however you want, and your classes don't have a property that needs to be set, nor is the connection a parameter, nor do they need to worry about opening or closing connections.
More details:
In C#, I'd recommend that ConnectionScope implement IDisposable; that way your classes can write code like "using ( var scope = new ConnectionScope() )" and ConnectionScope can free the connection (if appropriate) when it is disposed.
If you can limit yourself to one connection per thread (or process), then you can easily hold the connection in a [thread] static variable in ConnectionScope.
You can then use reference counting to ensure that your single connection is re-used when it's already open, and that connections are released when no one is using them.
Updated: Here is some simplified sample code:
public class ConnectionScope : IDisposable
{
    // Shared across all scopes; make these [ThreadStatic] for one connection per thread.
    private static Connection m_Connection;
    private static int m_ReferenceCount;

    public Connection Connection
    {
        get { return m_Connection; }
    }

    public ConnectionScope()
    {
        // Open the shared connection lazily, on first use.
        if ( m_Connection == null )
        {
            m_Connection = OpenConnection();
        }
        m_ReferenceCount++;
    }

    public void Dispose()
    {
        // Release the shared connection when the last scope is disposed.
        m_ReferenceCount--;
        if ( m_ReferenceCount == 0 )
        {
            m_Connection.Dispose();
            m_Connection = null;
        }
    }
}
Example code of how one (any) of your classes would use it:
using ( var scope = new ConnectionScope() )
{
    scope.Connection.ExecuteCommand( ... )
}
I would prefer the latter method. It sounds like your classes use the database connection as a conduit to the persistence layer. Making the caller pass in the database connection makes it clear that this is the case. If the connection/transaction were represented as a property of the object, then things are not so clear and all of the ownership and lifetime issues come out. Better to avoid them from the start.

ServiceContainer, IoC, and disposable objects

I have a question, and I'm going to tag this subjective since that's what I think it evolves into; it's more of a discussion. I'm hoping for some good ideas or some thought-provokers. I apologize for the long-winded question, but you need the context.
The question is basically:
How do you deal with concrete types in relation to IoC containers? Specifically, who is responsible for disposing them, if they require disposal, and how does that knowledge get propagated out to the calling code?
Do you require them to be IDisposable? If not, is that code future-proof, or is the rule that you cannot use disposable objects? If you enforce IDisposable requirements on interfaces and concrete types to be future-proof, whose responsibility is it to dispose of objects injected through constructor calls?
Edit: I accepted the answer by @Chris Ballard since it's the closest one to the approach we ended up with.
Basically, we always return a type that looks like this:
public interface IService<T> : IDisposable
    where T : class
{
    T Instance { get; }
    Boolean Success { get; }
    String FailureMessage { get; } // in case Success=false
}
We then return an object implementing this interface back from both .Resolve and .TryResolve, so that what we get in the calling code is always the same type.
Now, the object implementing this interface, IService<T>, is IDisposable and should always be disposed of. It's not up to the programmer who resolves a service to decide whether the IService<T> object should be disposed or not.
However, and this is the crucial part: the knowledge of whether the service instance should be disposed is baked into the object implementing IService<T>. So if it's a factory-scoped service (i.e. each call to Resolve ends up with a new service instance), the service instance will be disposed when the IService<T> object is disposed.
This also made it possible to support other special scopes, like pooling. We can now say that we want a minimum of 2 service instances, a maximum of 15, and typically 5, which means that each call to .Resolve will either retrieve a service instance from a pool of available objects or construct a new one. And then, when the IService<T> object that holds the pooled service is disposed of, the service instance is released back into its pool.
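As an illustration only, a minimal sketch of what the factory-scoped wrapper could look like (this class is hypothetical; the real one depends on the container):

internal sealed class FactoryScopedService<T> : IService<T> where T : class
{
    public T Instance { get; private set; }
    public Boolean Success { get; private set; }
    public String FailureMessage { get; private set; }

    public FactoryScopedService(T instance, Boolean success, String failureMessage)
    {
        Instance = instance;
        Success = success;
        FailureMessage = failureMessage;
    }

    public void Dispose()
    {
        // Factory scope: this wrapper owns the instance, so dispose it if disposable.
        var disposable = Instance as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }
}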
Sure, this made all code look like this:
using (var service = ServiceContainer.Global.Resolve<ISomeService>())
{
    service.Instance.DoSomething();
}
but it's a clean approach, and it has the same syntax regardless of the type of service or concrete object in use, so we chose that as an acceptable solution.
Original question follows, for posterity
Long-winded question comes here:
We have an IoC container that we use, and recently we discovered what amounts to a problem.
In non-IoC code, when we wanted to use, say, a file, we used a class like this:
using (Stream stream = new FileStream(...))
{
    ...
}
There was no question as to whether this class was something that held a limited resource or not, since we knew that files had to be closed, and the class itself implemented IDisposable. The rule is simply that every class we construct an object of, that implements IDisposable, has to be disposed of. No questions asked. It's not up to the user of this class to decide if calling Dispose is optional or not.
Ok, so on to the first step towards the IoC container. Let's assume we don't want the code to talk directly to the file, but instead go through one layer of indirection. Let's call this class a BinaryDataProvider for this example. Internally, the class is using a stream, which is still a disposable object, so the above code would be changed to:
using (BinaryDataProvider provider = new BinaryDataProvider(...))
{
    ...
}
This doesn't change much. The knowledge that the class implements IDisposable is still here, no questions asked, we need to call Dispose.
But, let's assume that we have classes that provide data that right now doesn't use any such limited resources.
The above code could then be written as:
BinaryDataProvider provider = new BinaryDataProvider();
...
OK, so far so good, but here comes the meat of the question. Let's assume we want to use an IoC container to inject this provider instead of depending on a specific concrete type.
The code would then be:
IBinaryDataProvider provider =
    ServiceContainer.Global.Resolve<IBinaryDataProvider>();
...
Note that I assume there is an independent interface available that we can access the object through.
With the above change, what if we later on want to use an object that really should be disposed of? None of the existing code that resolves that interface is written to dispose of the object, so what now?
The way we see it, we have to pick one solution:
Implement runtime checking such that if a concrete type being registered implements IDisposable, the interface it is exposed through must also implement IDisposable. This is not a good solution.
Enforce a constraint on the interfaces being used: they must always inherit from IDisposable, in order to be future-proof.
Enforce at runtime that no concrete types can be IDisposable, since this is specifically not handled by the code using the IoC container.
Just leave it up to the programmer to check if the object implements IDisposable and "do the right thing"?
Are there others?
Also, what about injecting objects into constructors? Our container, like some of the other containers we've looked into, is capable of injecting a fresh object into a parameter of a constructor of a concrete type. For instance, if our BinaryDataProvider needs an object that implements the ILogging interface, and we enforce IDisposable-ness on these objects, whose responsibility is it to dispose of the logging object?
What do you think? I want opinions, good and bad.
One option might be to go with a factory pattern, so that the objects created directly by the IoC container never need to be disposed themselves, eg
IBinaryDataProviderFactory factory =
    ServiceContainer.Global.Resolve<IBinaryDataProviderFactory>();

using (IBinaryDataProvider provider = factory.CreateProvider())
{
    ...
}
The downside is added complexity, but it does mean that the container never creates anything the developer is supposed to dispose of; it is always explicit code that does this.
If you really want to make it obvious, the factory method could be named something like CreateDisposableProvider().
(Disclaimer: I'm answering this based on java stuff. Although I program C# I haven't proxied anything in C# but I know it's possible. Sorry about the java terminology)
You could let the IoC framework inspect the object being constructed to see if it supports IDisposable. If not, you could use a dynamic proxy to wrap the actual object that the IoC framework provides to the client code. This dynamic proxy could implement IDisposable, so that you'd always deliver an IDisposable to the client. As long as you're working with interfaces, that should be fairly simple.
Then you'd just have the problem of communicating to the developer when the object is an IDisposable. I'm not really sure how that could be done in a nice manner.
You actually came up with a very dirty solution: your IService contract violates the SRP, which is a big no-no.
What I recommend is to distinguish so-called "singleton" services from so-called "prototype" services. The lifetime of "singleton" ones is managed by the container, which may query at runtime whether a particular instance implements IDisposable and invoke Dispose() on shutdown if so.
Managing prototypes, on the other hand, is entirely the responsibility of the calling code.
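A minimal sketch of that split (a toy container, not a real library API):

using System;
using System.Collections.Generic;

public sealed class SimpleContainer : IDisposable
{
    private readonly List<object> singletons = new List<object>();

    // "Singleton" services are tracked so the container can dispose them.
    public T RegisterSingleton<T>(T instance)
    {
        singletons.Add(instance);
        return instance;
    }

    public void Dispose()
    {
        // On shutdown, dispose every container-owned singleton that is disposable.
        // "Prototype" instances are never tracked here; their callers own them.
        foreach (var s in singletons)
        {
            var d = s as IDisposable;
            if (d != null) d.Dispose();
        }
        singletons.Clear();
    }
}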