Default values of OperationRetrySettings for default communication proxy - azure-service-fabric

When I create a proxy to my service like this:
ServiceProxy.Create<IMyService>(new Uri("fabric:/MyApplication/MyService"));
What values will the OperationRetrySettings instance have?
Will there be any logic in the default TryHandleException implementation?
Will any retry-on-exception logic even be involved with the code above?
Is there a way to adjust the retry-on-exception logic and change the values of the default OperationRetrySettings?
I use FabricTransportServiceRemotingListener.

Based on my observations, which definitely do not cover all of the questions completely because there is not much information available:
Some defaults: at least, DefaultMaxRetryCount is 10 and each backoff interval is 2 seconds. I found these values by instantiating FabricTransportServiceRemotingClientFactory and passing a custom IExceptionHandler. This property probably has no meaning if you are using the default ServiceProxyFactory or ServiceProxy.
It looks like yes. I didn't find the exact default IExceptionHandler used by the proxy and the factory, but I noticed numerous retries when I threw a TimeoutException in a service. The logic of ActorRemotingExceptionHandler and ServiceRemotingExceptionHandler is probably used by default.
Yes.
Yes. You need to instantiate ServiceProxyFactory or ActorProxyFactory, passing it an IServiceRemotingClientFactory implementation (for example, FabricTransportServiceRemotingClientFactory), specifying OperationRetrySettings and an IExceptionHandler, and passing your own implementation as the exception handler.
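A minimal sketch of that wiring, assuming the older (V1) Microsoft.ServiceFabric.Services.Remoting API; exact constructor signatures vary between SDK versions, and MyExceptionHandler/IMyService are illustrative names:
using System;
using Microsoft.ServiceFabric.Services.Communication.Client;
using Microsoft.ServiceFabric.Services.Remoting.Client;
using Microsoft.ServiceFabric.Services.Remoting.FabricTransport.Client;

// Illustrative exception handler: retry TimeoutException as a transient error, rethrow anything else.
class MyExceptionHandler : IExceptionHandler
{
    public bool TryHandleException(ExceptionInformation exceptionInformation,
        OperationRetrySettings retrySettings, out ExceptionHandlingResult result)
    {
        if (exceptionInformation.Exception is TimeoutException)
        {
            result = new ExceptionHandlingRetryResult(
                exceptionInformation.Exception, true /* isTransient */, retrySettings, 3 /* maxRetryCount */);
            return true;
        }

        result = new ExceptionHandlingThrowResult();
        return false;
    }
}

// Custom retry settings: backoff on transient errors, backoff on non-transient errors, default max retry count.
var retrySettings = new OperationRetrySettings(TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1), 3);

// Wire the handler and the settings into a proxy factory instead of using the static ServiceProxy.Create.
var proxyFactory = new ServiceProxyFactory(
    callbackClient => new FabricTransportServiceRemotingClientFactory(
        callbackClient: callbackClient,
        exceptionHandlers: new IExceptionHandler[] { new MyExceptionHandler() }),
    retrySettings);

IMyService proxy = proxyFactory.CreateServiceProxy<IMyService>(new Uri("fabric:/MyApplication/MyService"));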

The ServiceProxy.Create uses a default ServiceProxyFactory.
It is created with the default OperationRetrySettings which are documented here:
https://msdn.microsoft.com/en-us/library/mt711955.aspx
The default ServiceProxyFactory also uses the default service remoting client factory, FabricTransportServiceRemotingClientFactory:
https://msdn.microsoft.com/en-us/library/microsoft.servicefabric.services.remoting.fabrictransport.client.fabrictransportserviceremotingclientfactory.fabrictransportserviceremotingclientfactory.aspx
Following the remarks in that documentation, you can see that it uses ServiceRemotingExceptionHandler, which is described here:
https://msdn.microsoft.com/en-us/library/microsoft.servicefabric.services.remoting.client.serviceremotingexceptionhandler.aspx
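Putting that together, a quick sketch (again assuming the V1 remoting API): the one-liner from the question behaves roughly like building the default factory yourself, which is where those defaults come from.
// Roughly what ServiceProxy.Create<IMyService>(...) amounts to:
// a ServiceProxyFactory with no arguments falls back to the default OperationRetrySettings
// and the default FabricTransportServiceRemotingClientFactory with its built-in exception handlers.
var defaultFactory = new ServiceProxyFactory();
IMyService proxy = defaultFactory.CreateServiceProxy<IMyService>(new Uri("fabric:/MyApplication/MyService"));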

Related

Pre/Post-Handler Hook for Micronaut

I was wondering if there is some way to provide two methods to Micronaut which are guaranteed to run before and after a request is passed to the handler.
In my case, this would be to initialize some thread-local data.
I know this could also be put in the handler itself, but duplicating the same lines of code in every handler isn't really the greatest solution.
Use a filter - see the docs at https://docs.micronaut.io/latest/guide/#filters

Create container command hangs, no error logged

I'm running a Drools 7.7.0.Final KIE server on Tomcat. I am seeing the following behavior when launching a container via a RESTful call to the KIE server.
The container is never created, and the RESTful call hangs indefinitely. When I query the server, I see that the container is stuck in 'status="Creating"'.
This doesn't always happen. It seems to depend on the rules. For the most part, my LHS (when clauses) are of the form:
myObject( (field1 != null) && field2 ) ... etc.
....where field2 is a boolean.
The difficulty seems to come in when I attempt something more complicated like:
myObject( JsonMapper.truth(propertiesString, "field2") )
...where propertiesString is a string containing JSON, and JsonMapper.truth is a static method that returns a boolean based on the decoded value of field2.
The odd thing is that I never receive a compilation error, and the behavior changes unpredictably when I remove/add various rules. Sometimes the container will be created even when multiple instances of rules with JsonMapper.truth exist in the rules file. There seems to be some subtle interaction between the rules.
My questions are:
1) Is there some danger associated with using a custom Java function like this in the when clause?
2) Is there a way to determine why the container creation is hanging? I am not finding any useful logs. Nothing useful seems to be written to the usual tomcat logs.
3) Has anyone seen this behavior (container creation hanging)?
I had a similar issue, but I thought it was related to enum usage. Switching the version to 7.9.0.Final fixed everything.

Proper way to communicate with socket

Is there any design pattern or something else for network communication using sockets?
I mean, what I always do is:
I receive a message from my client or my server
I extract the type of this message (e.g. LOGIN or LOGOUT or CHECK_TICKET etc.)
And I test this type in a switch-case statement
Then execute the suitable method for this type
This way is a little bit tedious when you have a lot of message types.
Each time I have to add a type, I have to add it to the switch-case.
Plus, it takes more machine operations when you have hundreds or thousands of message types in your protocol (due to the switch-case).
Thanks.
You could use a loop over a set of handler classes (i.e. one for each type of message supported). This is essentially the Composite pattern. The Component and each Composite then become independently testable. Once written, the Component need never change again, and support for a new message becomes isolated to a single new class (or perhaps a lambda or function pointer, depending on the language). You can also add/remove/reorder Composites in the Component at runtime, if that is something you want from your design (alternatively, if you want to prevent this, depending on your language you could use variadic templates). You could also look at Chain of Responsibility.
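A minimal C# sketch of the handler-per-message idea (here with a dictionary lookup rather than a loop; all class and member names are illustrative):
using System;
using System.Collections.Generic;

// Illustrative message shape; in practice this is whatever you parse off the socket.
public sealed class Message
{
    public string Type { get; set; }     // e.g. "LOGIN", "LOGOUT", "CHECK_TICKET"
    public string Payload { get; set; }
}

public interface IMessageHandler
{
    void Handle(Message message);
}

public sealed class LoginHandler : IMessageHandler
{
    public void Handle(Message message)
    {
        // authenticate, send a reply, ...
    }
}

// The dispatcher replaces the switch: supporting a new message type means
// registering one new handler class instead of editing a central switch-case.
public sealed class MessageDispatcher
{
    private readonly Dictionary<string, IMessageHandler> handlers =
        new Dictionary<string, IMessageHandler>(StringComparer.OrdinalIgnoreCase);

    public void Register(string type, IMessageHandler handler) => handlers[type] = handler;

    public void Dispatch(Message message)
    {
        if (handlers.TryGetValue(message.Type, out var handler))
            handler.Handle(message);
        else
            throw new InvalidOperationException($"No handler for message type '{message.Type}'.");
    }
}

// Usage:
// var dispatcher = new MessageDispatcher();
// dispatcher.Register("LOGIN", new LoginHandler());
// dispatcher.Dispatch(receivedMessage);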
However, if you think that adding a case to a switch is a bit laborious, I suspect that writing a new class would be too.
P.S. I don't see a good way of avoiding steps 1 and 2.

hunchentoot session- v. thread-localized values (ccl)

I'm using hunchentoot session values to make my server code re-entrant. The problem is that session values are, by definition, retained for the duration of the session, i.e., from one call from the same browser to the next, whereas what I'm really looking for amounts to thread-specific re-entrancy, so that all the values disappear between calls -- I want to treat each click as a separate "from scratch" event, even if the clicks come from the same session. It's easy enough to have the driver either set my session values to nil or delete them, but I'm wondering whether there's a "correct" way to do this. I don't see any thread-based analog to hunchentoot:session-value in the documentation.
Thanks in advance for any guidance you can offer.
If you want a value to be "thread specific" and at the same time "from scratch" on every request, every request would have to be dispatched in a brand-new thread. This is not the case according to the Hunchentoot documentation, which says that two models are supported: a single-threaded taskmaster and a thread-per-connection taskmaster.
If your configuration is multi-threaded, a thread-specific variable bound during request handling can therefore be expected to be per-connection. In a single-threaded Hunchentoot setup, it will effectively be global, tied to the request-servicing thread.
A thread-based analog to hunchentoot:session-value probably doesn't exist because it would only introduce behaviors into the web app that change surprisingly if the threading model is reconfigured, or if the request pattern from the browser changes. A browser can make multiple requests over the same connection, or close the connection between requests.
To extend the request objects with custom per-request state, I would look into, perhaps, subclassing the acceptor (how to do this is described in the docs). My custom acceptor would have a custom method on the process-connection generic function which would create extended/subclassed request objects carrying the extra stuff I want to put into a request.
Another way would be to have some global weak hash table which maps request objects, as keys, to additional information.

Setting ETW event level in service fabric tracing

When you create a service fabric application project using Visual Studio, you get an implementation of EventSource (called ServiceEventSource). For example, here is one of the method implementations:
private const int ServiceRequestStopEventId = 6;

[Event(ServiceRequestStopEventId, Level = EventLevel.Informational, Message = "Service request '{0}' finished", Keywords = Keywords.Requests)]
public void ServiceRequestStop(string requestTypeName)
{
    WriteEvent(ServiceRequestStopEventId, requestTypeName);
}
As you can see, this method has an Event attribute with the Level argument set.
Where would I set that Level argument value?
I am thinking that the value of this Level argument controls how much output gets generated. Am I correct?
Can I modify this Level argument value dynamically at run time and at will?
You can set Level only in the Event attribute.
The amount of output that gets generated depends on the consumers of the logs. If there are no consumers or listeners, the event will not be generated at any level. We can say that the level influences the amount of output, but only if there are consumers of the event.
No, you can't modify the level dynamically. To get a similar effect, you could have two methods with the same parameters but different levels; see the sketch below.
You can find all the interesting information about ETW and its configuration here.
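For example, something along these lines (a sketch that follows the generated ServiceEventSource style; the Verbose variant and its event ID are made up for illustration):
// Same parameters as ServiceRequestStop, but a different (made-up) event ID and a Verbose level.
// Callers then choose at run time which of the two methods to invoke.
private const int ServiceRequestStopVerboseEventId = 106;

[Event(ServiceRequestStopVerboseEventId, Level = EventLevel.Verbose, Message = "Service request '{0}' finished", Keywords = Keywords.Requests)]
public void ServiceRequestStopVerbose(string requestTypeName)
{
    WriteEvent(ServiceRequestStopVerboseEventId, requestTypeName);
}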
The code just declares information about the ETW events it generates. Setting the level indicates which severity category the event will be put in; it doesn't configure whether the event is output. The logging tool determines whether it's logged or not, and you can usually change that level in the logging tool at run time.
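As a rough illustration of "the consumer decides", here is an in-process EventListener sketch; the event source name is a placeholder, and in a real cluster an agent such as WAD plays this role:
using System;
using System.Diagnostics.Tracing;

// Sketch of an in-process consumer. "MyCompany-MyApplication-MyService" is a placeholder;
// use whatever name your ServiceEventSource actually declares.
internal sealed class InformationalListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        if (eventSource.Name == "MyCompany-MyApplication-MyService")
        {
            // The listener, not the [Event] attribute, sets the cut-off:
            // only events at Informational severity or higher are delivered here.
            EnableEvents(eventSource, EventLevel.Informational, EventKeywords.All);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        Console.WriteLine($"{eventData.Level}: {eventData.EventName}");
    }
}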
Some useful links:
Configure WAD:
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-diagnostics-how-to-setup-wad/
Use Elasticsearch:
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-diagnostic-how-to-use-elasticsearch/
Use OMS to analyse the events:
https://azure.microsoft.com/en-us/documentation/articles/log-analytics-service-fabric/
Use Service Profiler (Actors):
https://www.azureserviceprofiler.com/