Setting ETW event level in Service Fabric tracing

When you create a Service Fabric application project using Visual Studio, you get an implementation of EventSource (called ServiceEventSource). For example, here is one of the method implementations:
private const int ServiceRequestStopEventId = 6;

[Event(ServiceRequestStopEventId, Level = EventLevel.Informational, Message = "Service request '{0}' finished", Keywords = Keywords.Requests)]
public void ServiceRequestStop(string requestTypeName)
{
    WriteEvent(ServiceRequestStopEventId, requestTypeName);
}
As you can see, this method has an Event attribute with its Level argument set.
Where would I set that Level argument's value?
I am thinking that the value of this Level argument controls how much output gets generated. Am I correct?
Can I modify this Level value dynamically at run time, at will?

You can set Level only in the Event attribute.
The amount of output generated depends on the consumers of the logs: if there are no consumers or listeners, the event will not be generated at any level. The level does affect how much output is produced, but only when there are consumers listening for the event.
No, you can't modify the level dynamically. To work around this, you could define two event methods with the same parameters but different Level values.
You can find all the interesting information about ETW and its configuration here.

The code only declares information about the ETW events it generates. Setting the level indicates which severity category the event is put in; it doesn't configure whether the event is output. The logging tool that consumes the events determines whether they are logged, and you can usually change that level filter in the logging tool at run time.
Some useful links:
Configure WAD:
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-diagnostics-how-to-setup-wad/
Use Elasticsearch:
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-diagnostic-how-to-use-elasticsearch/
Use OMS to analyse the events:
https://azure.microsoft.com/en-us/documentation/articles/log-analytics-service-fabric/
Use Service Profiler (Actors):
https://www.azureserviceprofiler.com/

Related

How do I Refresh AnyLogic ModelDatabase from code?

I have linked many parameters in my model to values in the internal ModelDatabase. This works great.
Now I want to allow the user to import a new "Input File" for a specific scenario. I have added a fileChooser element on the Simulation Experiment screen and in the onUpload action I use the ModelDatabase.importFromExternalDB to upload the relevant sheet to the relevant table.
However, this does not seem to work. Initially I thought that the update simply did not happen but when I stop the model in the AnyLogic IDE and start it again in the IDE, the new values are used.
It appears as if the update only happens on startup/close and that the database is "static" while running. I did set the auto-commit parameter to true on the importFromExternalDB function, but this made no difference.
Is there a function I can call to force a "refresh" of the internal database?
You probably load the data at runtime using the default code, such as selectFrom("some String with query").
However, these queries always load from the cached database to speed things up.
Instead, you must force AnyLogic to load from the live database using an additional argument. In this example, it would be selectFrom(false, "some String query").
This is also documented in the help.
So go through all the queries in your model and add the appropriate boolean argument to force a load from the non-cached database, as sketched below.
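As a rough illustration only, mirroring the call form used in this answer (the query strings are placeholders; the exact query function and overload depend on the code AnyLogic generated for your model, so check it against your model and the help):

// Default, cached read - returns the values loaded when the model started:
selectFrom("some String with query");

// Non-cached read - the leading boolean argument forces a read from the live database:
selectFrom(false, "some String query");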

Axon Framework: Handle only events published by the same JVM instance?

Hi Axon Framework community,
I'd like to have your opinion on how to solve the following problem properly.
My Axon Test Setup
Two instances of the same Spring Boot application (using axon-spring-boot-starter 4.4 without Axon Server)
Every instance publishes the same events on a regular interval
Both instances are connected to the same EventSource (single SQL Server instance using JpaEventStorageEngine)
Every instance is configured to use TrackingEventProcessors
Every instances has the same event handlers registered
What I want to achieve
I'd like that events published by one instance are only handled by the very same instance
If instance1 publishes eventX then only instance1 should handle eventX
What I've tried so far
I can achieve the above scenario using SubscribingEventProcessor. Unfortunately this is not an option in my case, since we'd like to have the option to replay events for rebuilding / adding new query models.
I could assign the event handlers of every instance to different processing groups. Unfortunately this didn't work. Maybe because every TrackingEventProcessor instance processes the same EventStream? I'm not so sure about this, though.
I could implement a MessageHandlerInterceptor which only proceeds if the event's origin is the same instance. This is what I have implemented so far, and it works properly:
MessageHandlerInterceptor
class StackEventInterceptor(private val stackProperties: StackProperties) : MessageHandlerInterceptor<EventMessage<*>> {
    override fun handle(unitOfWork: UnitOfWork<out EventMessage<*>>?, interceptorChain: InterceptorChain?): Any? {
        val stackId = (unitOfWork?.message?.payload as SomeEvent).stackId
        if (stackId == stackProperties.id) {
            interceptorChain?.proceed()
        }
        return null
    }
}

@Configuration
class AxonConfiguration {

    @Autowired
    fun configure(eventProcessingConfigurer: EventProcessingConfigurer, stackProperties: StackProperties) {
        val processingGroup = "processing-group-stack-${stackProperties.id}"
        eventProcessingConfigurer.byDefaultAssignTo(processingGroup)
        eventProcessingConfigurer.registerHandlerInterceptor(processingGroup) { StackEventInterceptor(stackProperties) }
    }
}
Is there a better solution?
I have the impression that my current solution is probably not the best one, since ideally I'd like that only the event handlers which belong to a certain instance are triggered by the TrackingEventProcessor instances.
How would you solve that?
Interesting scenario you're having here, @thowimmer.
My first hunch would be to say "use the SubscribingEventProcessor instead".
However, you pointed out that that's not an option in your setup.
I'd argue it's very valuable for others who're in the same scenario to know why that's not an option. So, maybe you can elaborate on that (to be honest, I am curious about that too).
Now for your problem case: ensuring events are only handled within the same JVM.
Adding the origin to the events is definitely a step you can take, as it gives you a logical way to filter: "Does this event originate from my.origin()?" If not, you'd just ignore the event and be done with it, simple as that. There is another way to achieve this though, which I'll come to in a bit.
The place to filter is, however, mostly what you're looking for, I think. But first, I'd like to spell out why you need to filter in the first place. As you've noticed, the TrackingEventProcessor (TEP) streams events from a so-called StreamableMessageSource. The EventStore is an implementation of such a StreamableMessageSource. As you are storing all events in the same store, it will simply stream everything to your TEPs. Since your events are part of a single Event Stream, you are required to filter them at some stage. Using a MessageHandlerInterceptor would work; you could even write a HandlerEnhancerDefinition, allowing you to add additional behaviour to your event handling functions. However you put it, with the current setup, filtering needs to be done somewhere, and the MessageHandlerInterceptor is arguably the simplest place to do it.
However, there is a different way of dealing with this. Why not segregate your Event Store into two distinct instances, one for each application? Apparently they have no need to read from one another, so why share the same Event Store at all? Without knowing further background of your domain, I'd guess you are essentially dealing with applications residing in distinct bounded contexts. Very shortly put, there is zero interest in sharing everything between both applications/contexts; you share specific portions of your domain language very consciously with one another.
Note that support for multiple contexts, using a single communication hub in the middle, is exactly what Axon Server can achieve for you. I am not here to say you can't configure this yourself; I have done it in the past. But leaving that work to somebody or something else, freeing you from the need to configure infrastructure, would be a massive time saver.
Hope this helps set out my thoughts on the matter a little, @thowimmer.
Summary:
Using the same EventStore for both instances is probably not an ideal setup if we want to use the capabilities of the TrackingEventProcessor.
Options to solve it:
Dedicated (not mirrored) DB instance for each application instance.
Using multiple contexts with Axon Server.
If we decide to solve the problem at the application level, filtering using a MessageHandlerInterceptor is the simplest solution.
Thanks @Steven for exchanging ideas.
EDIT:
Solution at the application level using a CorrelationDataProvider and a MessageHandlerInterceptor, filtering out events that did not originate in the same process.
AxonConfiguration.kt
const val METADATA_KEY_PROCESS_ID = "pid"
const val PROCESSING_GROUP_PREFIX = "processing-group-pid"

@Configuration
class AxonConfiguration {

    @Bean
    fun processIdCorrelationDataProvider() = ProcessIdCorrelationDataProvider()

    @Autowired
    fun configureProcessIdEventHandlerInterceptor(eventProcessingConfigurer: EventProcessingConfigurer) {
        val processingGroup = "$PROCESSING_GROUP_PREFIX-${ApplicationPid()}"
        eventProcessingConfigurer.byDefaultAssignTo(processingGroup)
        eventProcessingConfigurer.registerHandlerInterceptor(processingGroup) { ProcessIdEventHandlerInterceptor() }
    }
}

class ProcessIdCorrelationDataProvider : CorrelationDataProvider {
    override fun correlationDataFor(message: Message<*>?): MutableMap<String, *> {
        return mutableMapOf(METADATA_KEY_PROCESS_ID to ApplicationPid().toString())
    }
}
class ProcessIdEventHandlerInterceptor : MessageHandlerInterceptor<EventMessage<*>> {
    override fun handle(unitOfWork: UnitOfWork<out EventMessage<*>>?, interceptorChain: InterceptorChain?): Any? {
        val currentPid = ApplicationPid().toString()
        val originPid = unitOfWork?.message?.metaData?.get(METADATA_KEY_PROCESS_ID)
        if (currentPid == originPid) {
            interceptorChain?.proceed()
        }
        return null
    }
}
See full demo project on GitHub

Dynamically adjusting arrival rate in source

The basic idea behind the modeling issue is a breakdown of a production machine.
I would like to model this by setting the arrival rate (simply arrivals per second) to zero (Source.rate = 0). After the machine is repaired, the arrival rate is set to its actual value again (e.g., Source.rate = 5). While the first command does the job, the second does not seem to have any effect, i.e. new agents are not created.
The segment of the model is rather simple: Source --> Select Output (decision about breakdown) --> true: go on in production; false: delay (repair machine) --> go on in production.
Source.rate = 0 is called at the out port (false) of "breakdown" and Source.rate = 5 at the out port of "repair".
https://i.stack.imgur.com/hqGoI.png
Of course, this issue might be modeled differently (e.g., using a Hold block with "forced pushing" disabled); however, it is not clear to me why my approach does not work.
Thanks in advance!
Instead of using source.rate=5; use source.set_rate(5);
To expand on Felipe's answer with an explanation:
Instead of using source.rate=5; use source.set_rate(5);
rate is effectively a Parameter (in the AnyLogic sense) of the Source block. (All of AnyLogic's Process Modeling blocks are actually themselves Agents developed by AnyLogic, and thus have Parameters, Variables, etc.)
You can set an AnyLogic Parameter directly (by just assigning a value, as you did), but they also all have a set_<parameter name> method (function) which should really always be used instead, because this triggers any internal on-change logic for that Parameter. It is only this triggered logic (internal to the Source block) which causes the Source to 're-evaluate' the rate properly.
(You can use on-change logic for Parameters in your own models, and need to do so when altering a parameter requires some 'adjustments' to the rest of the model; i.e., in situations where the change doesn't 'just work' due to other bits of the model reading the new value after the change point.)
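As a concrete illustration (the block and branch names are hypothetical, matching the setup described in the question):

// In the action of the 'false' (breakdown) exit of the SelectOutput block:
source.set_rate(0);   // stop new arrivals while the machine is down

// In the 'on exit' action of the repair Delay block:
source.set_rate(5);   // resume arrivals; set_rate() triggers the Source's internal on-change logic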
I don't know why your model doesn't work (maybe more details of your model are needed), but a simple solution which I tested and which works is as below:
You can set the source's "Type of arrival" to "calls of inject() function", add an event to your model, set its "Trigger type" to "Rate" and set its rate value to 5. Then, in the action code of the event, use the code below:
if (yourCondition) {
    source.inject(1);
}
I hope it helps you.

hunchentoot session- v. thread-localized values (ccl)

I'm using hunchentoot session values to make my server code re-entrant. The problem is that session values are, by definition, retained for the duration of the session, i.e., from one call from the same browser to the next, whereas what I really am looking for amounts to thread-specific re-entrancy, so that all the values disappear between calls -- I want to treat each click as a separate "from scratch" event, even if the clicks come from the same session. It's easy enough to have the driver either set my session values to nil or delete them, but I'm wondering if there's a "correct" way to do this? I don't see any thread-based analog to hunchentoot:session-value in the documentation.
Thanks in advance for any guidance you can offer.
If you want a value to be "thread specific" and at the same time to be "from scratch" on every request, that requires that every request must be dispatched in a brand new thread. This is not the case according to the Hunchentoot documentation, which says that two models are supported: a single-threaded taskmaster and a thread-per-connection taskmaster.
If your configuration is multi-threaded, then a thread-specific variable bound in a request handler can therefore be expected to be per-connection. In a single-threaded Hunchentoot setup, it will effectively be global, tied to the request-servicing thread.
A thread-based analog to hunchentoot:session-value probably doesn't exist because it would only introduce behaviors into the web app which surprisingly change if the threading model is reconfigured, or if the request pattern from the browser changes. A browser can make multiple requests using the same connection, or close the connection between requests.
To extend the request objects with custom per-request data, I would look into, perhaps, subclassing the acceptor (how to do this is described in the docs). My custom acceptor would have a custom method on the process-connection generic function which would create extended/subclassed request objects carrying the extra stuff I wanted to put into a request.
Another way would be to have some global weak hash table which maps request objects, as keys, to the additional information.

API function to add an Action to an Event or Schedule?

I need to add an Action to a Schedule object that is being created through the API. There are documented interfaces to set almost all the options except the Action. How are Actions attached to these Objects?
When I attempt to programmatically add a new event, read from a separate configuration file, to a Schedule object I get errors stating that the Schedule has already been initialized and that I must construct a new object and add its configuration manually. I can do most of that using the available Schedule API. I can set up everything about the Schedule except the Action code.
The Schedule is used in a Process Model. Looking at the model in the Java editor, I see the code I'm trying to replicate via the API in a function that looks like this:
@Override
@AnyLogicInternalCodegenAPI
public void executeActionOf( EventTimeout _e ) {
    if ( _e == _fuelDeliverySchedule_Action_xjal ) {
        Schedule<Integer> self = this.fuelDeliverySchedule;
        Integer value = fuelDeliverySchedule.getValue();
        logger.info("{} received {} pounds of fuel", this.getName(), this.fuelDeliverySchedule.getValue());
        this.fuelAvailablePounds += fuelDeliverySchedule.getValue();
        _fuelDeliverySchedule_Action_xjal.restartTo( fuelDeliverySchedule.getTimeOfNextValue() );
        return;
    }
    super.executeActionOf( _e );
}
Maybe I can use something like this to create my own action function, but I'm not sure how to make the Scheduled event use it.
Thanks,
Thom
[Edited (expanded/rewrote) 03.11.2014 after more user detail on the context.]
You clarified the context with
When I attempt to programmatically add "a thing that happens", read
from a separate configuration file, to a Schedule object I get errors
stating that the Schedule has already been initialized and that I must
construct a new object and add its configuration manually. I can do
most of that using the available Schedule API. I can set up everything
about the Schedule except the Action code.
(You might want to edit that into the question... In general, it's always good to explain the context for why you're trying to do the thing.)
I think I understand now. I presume that your config file contains scheduling details and, when you say you were trying to "add a thing that happens" (which errored), you meant that you were trying to change the scheduling 'pattern' in the Schedule. So your problem is that, since you couldn't adjust a pre-existing schedule, you had to instantiate (create) your own programmatically, but the Schedule API doesn't allow you to set the action code (as seen on the GUI schedule element).
This is a fairly involved solution so bear with me. I give a brief 'tl;dr'
summary before diving into the detail.
Summary
You can't programmatically code an AnyLogic action (for any element) because that would amount to
dynamically creating a Java class. Solving your problem requires recognising
that the schedule GUI element creates both a Schedule instance and a
timeout event (EventTimeout) instance to trigger the action. You can therefore create these two elements explicitly yourself (the former dynamically). The trick is to reset the timeout event when you replace the Schedule instance (to trigger at the next 'flip' point of the new Schedule).
[Actually, from your wording, I suspect that the action is always the same but, for generality, I show how you could handle it if your config file details might want to change the nature of the action as well as those of the scheduling pattern.]
Detail
The issue is that the GUI element (confusingly) isn't just a Schedule instance
in terms of the code it generates. There is one (with the same name as that of
the GUI element), which just contains the schedule 'pattern' and, as in the API,
has methods to determine when the next on/off period (for an on/off schedule) occurs. (So
it is kind of fancy calendar functionality.) But AnyLogic also generates a
timeout event to actually perform the action; if you look further in the code
generated, you'll see stuff similar to the below (assuming your GUI schedule is called
fuelSchedule, with Java comments added by
me):
// Definition of the timeout event
@AnyLogicInternalCodegenAPI
public EventTimeout _fuelSchedule_Action_xjal = new EventTimeout(this);

// First occurrence time of the event as the next schedule on/off change
// time
@Override
@AnyLogicInternalCodegenAPI
public double getFirstOccurrenceTime( EventTimeout _e ) {
    if ( _e == _fuelSchedule_Action_xjal ) return fuelSchedule.getTimeOfValue() == time() ? time() : fuelSchedule.getTimeOfNextValue();
    return super.getFirstOccurrenceTime( _e );
}

// After your user action code, the event is rescheduled for the next
// schedule on/off change time
_fuelSchedule_Action_xjal.restartTo( fuelSchedule.getTimeOfNextValue() );
i.e., this creates an event which triggers each time the schedule 'flips', and performs the action specified in the GUI schedule element.
So there is no action to change on the Schedule instance; it's actually related to the EventTimeout instance. However, you can't programmatically change it there (or create a new one dynamically) for the same reason that you can't change the action of any AnyLogic element:
this would effectively be programmatically
creating a Java class definition, which isn't possible without very specialised
Java code. (You can create Java source code in a string and
dynamically run a Java compiler on it to generate a class. However, this is very
'advanced' Java, has lots of potential pitfalls, and I would definitely not
recommend going that route. You would also have to be creating source for a user subclass
of EventTimeout, since you don't know the correct source code for AnyLogic's proprietary EventTimeout class, and this might change per release in any case.)
But you shouldn't need to: there should be a strict set of possible actions that your config file can contain. (They can't be arbitrary Java code snippets, since they have to 'fit in' with the simulation.) So you can do what you want by programmatically creating the Schedule but with a GUI-created timeout event that you adjust accordingly (assuming an off/on schedule here and that there is only one schedule active at once; obviously tweak this skeleton to your needs, and note that I haven't completely tested this in AnyLogic):
1. Have an AnyLogic variable activeAction which specifies the current active
action. (I take this as an int here for simplicity, but it's better to use a
Java enum which is the same as an AnyLogic 7 Option List, and can just be
created in raw Java in AnyLogic 6.)
2. Create a variable in the GUI, say called fuelSchedule, of type Schedule but with initial value null. Create a separate timeout event, say called fuelScheduleTrigger, in User Control mode, with action as:
// Perform the appropriate action (dependent on activeAction)
doAppropriateScheduleAction();
// Set the event to retrigger at the next schedule on/off switch time
fuelScheduleTrigger.restartTo(fuelSchedule.getTimeOfNextValue());
(Being in User Control mode, this event isn't yet triggered to initially fire, which is what we want.)
3. Code a set of functions for each of the different action alternatives; let's say
there are only 2 (fuelAction1 and fuelAction2) here as an example. Code
doAppropriateScheduleAction as:
if (activeAction == 1) {
    fuelAction1();
}
else if (activeAction == 2) {
    fuelAction2();
}
4. In your code which reads the config file and gets the updated schedule info (presumably run from a cyclic timeout event or similar), have it replace fuelSchedule with a new instance with the revised schedule pattern (as you've been doing), set activeAction appropriately, and then reset the timeout event to the new fuelSchedule.getTimeOfNextValue() time (a consolidated sketch follows the code below):
[set up fuelSchedule and activeAction]
// Reset schedule action to match revised schedule
fuelScheduleTrigger.restartTo(fuelSchedule.getTimeOfNextValue());
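Putting steps 1-4 together, here is a minimal sketch of what that refresh code might look like (the function name and the way the revised Schedule and action code are obtained are hypothetical; restartTo and getTimeOfNextValue are the AnyLogic calls already used above):

// Hypothetical function called after reading the config file. It assumes the
// GUI-created variables fuelSchedule (type Schedule<Integer>, initially null) and
// activeAction (int), plus the user-controlled timeout event fuelScheduleTrigger, from steps 1-2.
void applyRevisedSchedule(Schedule<Integer> revisedSchedule, int revisedAction) {
    fuelSchedule = revisedSchedule;  // replace the programmatically-built schedule pattern
    activeAction = revisedAction;    // select which fuelActionN() doAppropriateScheduleAction() will run
    // Re-arm the trigger so the action fires at the next 'flip' of the new schedule
    fuelScheduleTrigger.restartTo(fuelSchedule.getTimeOfNextValue());
}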
I think this works OK in the edge case when the new Schedule had its next 'flip' at the time
you set it up. (If you restart an event to the current time, I think it schedules an event OK at the current time which will occur next if there are no other events also scheduled for the current time; actually, it will definitely occur next if you are using a LIFO simultaneous-time-scheduling regime---see my blog post.)
Alternative & AnyLogic Enhancement
An alternative is to create a 'full' schedule in the GUI with action as earlier. Your config file reading code can replace the underlying Schedule instance and then reset the internal AnyLogic-generated timeout event. However, this is less preferable because you are relying on an internally-named AnyLogic event (which might also change in future AnyLogic releases, breaking your code).
AnyLogic could help this situation by adding a method to the Schedule API that gets the related timeout event, e.g., getActionTriggeringEventTimeout(). Then you would be able to 'properly' restart it, and the Schedule API would make it much clearer that a Schedule is always associated with an EventTimeout that does the triggering for the action.
Of course, AnyLogic could also go further by changing Schedule to allow scheduling details to be changed dynamically (and internally handling the required updates to the timeout event if it continued to be designed like that), but that's a lot more work and there may be deeper technical reasons why they wanted the schedule pattern to be fixed once the Schedule is initialised.
Any AnyLogic support staff reading?