Unreal GAS: When is a GameplayAbility considered executed?

In Unreal GameplayAbilitySystem, the Cancel Abilities With Tag property, for example, is documented with
"Cancels any already-executing Ability [..]"
and the source code (in GameplayAbility.h) says
"Abilities with these tags are cancelled when this ability is executed"
To know at design time which abilities will be affected when setting those tags, one has to be sure about the following:
When is a GameplayAbility considered executed, and when is it considered executing?
According to the documentation linked above, the GA is executed with CallActivateAbility(). However, activation can be aborted in CommitAbility() if the cooldown/cost conditions are not fulfilled (e.g. "not enough mana").
CommitAbility() checks via CommitCheck() whether the ability can be activated (cooldown/costs), and only if this succeeds is CommitExecute() called (which applies the cooldown and costs).
Before CommitAbility(), PreActivate() is called, which
1) sets (for instanced GameplayAbilities):
// ...
bIsActive = true;
bIsBlockingOtherAbilities = true;
bIsCancelable = true;
// ...
2) and blocks/cancels other GameplayAbilities with given tags using UAbilitySystemComponent::ApplyAbilityBlockAndCancelTags
Detailed questions which might help to answer the question above:
Is executing a GameplayAbility the same as activating a GameplayAbility, or do these terms have different meanings? (Both terms are used in the documentation and in function/variable names.)
Is a GameplayAbility always considered executing as soon as PreActivate() is called? (That function blocks/cancels other abilities via ApplyAbilityBlockAndCancelTags and therefore - according to the second quote above - the ability would already count as executing at that point.)
This would imply that a GameplayAbility is considered executing
independently of the outcome of CommitAbility()?
as long as UGameplayAbility::ActivateAbility() is executing?
How is a GameplayAbility detected as executing?
Is it enough to check for UGameplayAbility::bIsActive == true (for instanced GameplayAbilities)?
Or are executing abilities registered somewhere in UGameplayAbilitySystem?
...

Hypothesis
The terms activate/execute a GameplayAbility (GA) and active/executing GameplayAbility are used where abilities influence each other via GameplayTags. So when designing, these terms are important for understanding GameplayAbility interaction via tags. As a consequence:
An activated/executed GA may block/cancel other GAs (by adding tags).
A deactivated GA may unblock other GAs (by removing tags).
An active/executing GA must be able to receive a block/cancel via tag changes when another GA is executed.
Conclusion
To make the best use of tags for controlling the interaction between GameplayAbilities, keep the following in mind. A GameplayAbility is
activated/executed when CallActivateAbility() is called (in Blueprints, the events ActivateAbility and ActivateAbilityFromEvent are triggered by that function via ActivateAbility()).
deactivated when EndAbility() is called (which is also called by CancelAbility()).
active/executing between the call to UAbilitySystemComponent::TryActivateAbility() and the call to EndAbility().
If EndAbility() is not called, the GA is executing for the lifetime of ActivateAbility() (again, in Blueprints this roughly corresponds to the lifetime of the implemented events ActivateAbility and ActivateAbilityFromEvent).
Explanation
When a GA is considered activated/executed and deactivated (1, 2)
UAbilitySystemComponent::ApplyAbilityBlockAndCancelTags() triggers GAs to be cancelled or blocked/unblocked. Therefore, calls to it define the moments of activation/execution and deactivation.
Block/cancel are triggered by PreActivate(), which is called by CallActivateAbility().
Unblock is triggered by EndAbility() (which is also called by CancelAbility()).
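For reference, the tags that drive this behaviour live in containers on UGameplayAbility itself (CancelAbilitiesWithTag, BlockAbilitiesWithTag) and are typically filled in the ability's defaults or its C++ constructor. A minimal sketch (the ability class UGA_Sprint and the tag names are invented for illustration):
UGA_Sprint::UGA_Sprint()
{
    // Tags identifying this ability; other abilities match against these.
    AbilityTags.AddTag(FGameplayTag::RequestGameplayTag(FName("Ability.Sprint")));
    // Abilities carrying this tag are cancelled when this ability is executed
    // (i.e. when CallActivateAbility()/PreActivate() run, as described above).
    CancelAbilitiesWithTag.AddTag(FGameplayTag::RequestGameplayTag(FName("Ability.Reload")));
    // Abilities carrying this tag cannot be activated while this ability is active.
    BlockAbilitiesWithTag.AddTag(FGameplayTag::RequestGameplayTag(FName("Ability.Crouch")));
}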
When a GA is considered active/executing (3)
When it can be blocked
A GA can be blocked only during its activation. CanActivateAbility() checks (via DoesAbilitySatisfyTagRequirements()) the blocked tag list of the AbilitySystemComponent (ASC), UAbilitySystemComponent::BlockedAbilityTags. If one of the GA's tags is present there, the ability activation is aborted.
and when it can be cancelled (regardless of whether the GA allows it)
A GA can be cancelled by other GAs via tags as long as it is registered in the AbilitySystemComponent (ASC) in UAbilitySystemComponent::ActivatableAbilities and as long as the stored GA spec returns FGameplayAbilitySpec::IsActive() == true.
It gets added to that container via UAbilitySystemComponent::GiveAbility() and removed via UAbilitySystemComponent::ClearAbility().
FGameplayAbilitySpec::IsActive() returns true as long as FGameplayAbilitySpec::ActiveCount > 0. This counter is incremented upon GA activation (in UAbilitySystemComponent::InternalTryActivateAbility()) and decremented upon GA deactivation (in UAbilitySystemComponent::NotifyAbilityEnded()).
UASC::InternalTryActivateAbility() is called from UASC::TryActivateAbility(), and UASC::NotifyAbilityEnded() is called from EndAbility(). These two conditions are the reason why I consider a GA active/executing only between those calls rather than between UASC::GiveAbility() and UASC::ClearAbility().
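Putting this together, a rough way to list the GAs that an ASC currently considers active/executing could look like the following sketch (ASC is assumed to be a valid UAbilitySystemComponent pointer):
for (const FGameplayAbilitySpec& Spec : ASC->GetActivatableAbilities())
{
    if (Spec.IsActive())
    {
        // This spec's ability is currently active/executing and can therefore
        // be cancelled by another GA via matching tags.
        UE_LOG(LogTemp, Log, TEXT("Active ability: %s"), *GetNameSafe(Spec.Ability));
    }
}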
How those tags are triggered
When a GA is activated/deactivated, it triggers functions in the GameplayAbilitySystemComponent (ASC):
For un-/blocking
UAbilitySystemComponent::ApplyAbilityBlockAndCancelTags calls ...
UAbilitySystemComponent::BlockAbilitiesWithTags() or UAbilitySystemComponent::UnBlockAbilitiesWithTags(), respectively, which (finally) calls, for each of the tags in the list UAbilitySystemComponent::BlockedAbilityTags, ...
FGameplayTagCountContainer::UpdateTagMap_Internal(), which adds or removes the tag.
For cancelling
UAbilitySystemComponent::CancelAbilities(), which compares the tags provided by the cancelling GA with the tags of all currently active abilities among those stored in the Ability System Component (in UAbilitySystemComponent::ActivatableAbilities).
If there is a match, all of those GA instances are cancelled.
What about CommitExecute()
That function is independent of the interaction with other GAs via tags, so the Execute part of the name can be misleading.
CommitExecute() is called only if the cost of the GA can be afforded and if the GA is not on cooldown (all of that happens in CommitAbility()). But blocking/cancelling other GAs has already happened before, and unblocking GAs also happens independently of that function.
So what about CommitAbility()
For an executing GA, it doesn't matter whether this is called.
If CommitAbility() fails, the GA might be deactivated immediately after being activated, depending on the implementation of the GA. So it can influence the execution duration, which, however, is unimportant when designing the GAs' interaction via tags.
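To illustrate that dependency on the implementation, here is a common (sketched, not the only possible) ActivateAbility override that commits first and ends the ability right away if the commit fails - in that case the GA is only executing for a very short time (UGA_Sprint is the same hypothetical class as above):
void UGA_Sprint::ActivateAbility(const FGameplayAbilitySpecHandle Handle,
                                 const FGameplayAbilityActorInfo* ActorInfo,
                                 const FGameplayAbilityActivationInfo ActivationInfo,
                                 const FGameplayEventData* TriggerEventData)
{
    // Blocking/cancelling of other GAs has already happened in PreActivate() at this point.
    if (!CommitAbility(Handle, ActorInfo, ActivationInfo))
    {
        // Cost/cooldown check failed: end (deactivate) immediately, which also unblocks other GAs.
        EndAbility(Handle, ActorInfo, ActivationInfo, /*bReplicateEndAbility=*/ true, /*bWasCancelled=*/ true);
        return;
    }
    // ... actual ability logic; the GA stays active/executing until EndAbility() is called ...
}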
What about SetShouldBlockOtherAbilities()
This public function can be called to un-/block other GAs via tags from an arbitrary place in C++/blueprints.
E.g. it could be called several times during the lifetime of ActivateAbility() to un-/block other GAs. Note, however, that it cannot cancel other GAs. It is not clear what the best practices for using or avoiding this function are; it seems to lower the transparency of GA interaction via tags.
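For completeness, a small sketch of how it might be used from within an ability's own C++ logic (whether this is good practice is, as said, unclear):
// Somewhere inside the ability while it is active:
SetShouldBlockOtherAbilities(false); // temporarily stop blocking other GAs via BlockAbilitiesWithTag
// ... a phase of the ability during which other GAs are allowed ...
SetShouldBlockOtherAbilities(true);  // resume blocking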
(Where no class is given, the functions above are members of UGameplayAbility.)
Other ways of blocking/unblocking GAs (e.g. via InputID) or cancelling GAs (e.g. via UAbilitySystemComponent::CancelAllAbilities()) are not considered; this is about GA interaction via tags. Triggering GAs via tags (or GameplayEvents) is another topic as well.
It would be nice if a designer of the Unreal GAS could clarify things, correct incomplete conclusions or leave some words about best practices.

Related

How to get around using Enter and Exit blocks in "Prepare" flowchart (Execution error "0 isn't supported for building resource behavior flowcharts")

I have an airlock (small room called AL_2216) between 2 areas. The airlock has many different agent types passing through it (cart, product, operator, etc). There are queuing areas on either side of the airlock.
Because the space is small, I built a short flowchart that has a queue and restricted area blocks that all agents must pass through when going through this space. If the restricted area's capacity is full, the agents wait in either the InsideQueueArea or OutsideQueueArea depending on the direction they're going.
I send agents via Exit and Enter blocks to this flowchart and it works great on the top portion of the flowchart.
BUT if I try to use an Enter or Exit block in the prepare flowchart, I get this error:
I tried using a custom block instead of Enter and Exit blocks, but that creates a new instance of the code each time and the restrictions don't work together across the multiple custom blocks.
This airlock is just one of many in my model. Without referring to the same code, I'll have multiple copies that need to refer to each other's restricted areas and the flowcharts become huge and complicated. Is there a way to get around this?
EDIT:
I'm not sure what to do with these ports. They have no properties that do anything:
EDIT2:
Here's a file to see the behavior - Model2.zip
The Prepare flowchart portion is set to "ignore" so the model will run. You can see the operators and the carts passing through AL_2216 with only 2 being allowed at a time. If you uncheck "ignore" for the prepare flowchart, the error will trigger.
AnyLogic sent the right answer!!
So I was asking AnyLogic a different question and they recognized my name from this post! They sent a fix to me and it works exactly the way it should! The exception error message I was getting, "out: 0 isn't supported for...", made me think the Exit/Enter blocks were not supported in preparation flowcharts.
But actually, the seizeCart block didn't know where to start the prep flowchart because it wasn't directly connected to the resource task start block. A quick setting change under the Advanced section of the seizeCart block defining which resource task start block to start at did the trick! Here's the email from AnyLogic:
-The error text and documentation are not sufficient for understanding this (the error text is confusing), I suppose it is obsolete error text. We will rectify the description;
-Under the question there is a more generic discussion which seems to be unrelated to the initial problem. Please let me know if I miss something or if your model does not work as you expect even after adjustment of seizeCart block property.
I think you should replace the Enter and Exit blocks that lead to the bottom input of your seizeCart Seize block with simple Port objects (from the Agent palette).
As per the help for Seize:
So it wants a direct link to a ResourceTaskStart flow and your Enter/Exit combinations might be ... not "direct" enough... Try it.
So here's what I ended up doing. It's the best I could come up with that could be easily replicated for lots of airlocks.
I've added a wait block (dummyThruAL_2216) to my Product flowchart prior to seizing the cart. This wait block injects a new agent into sourceDummy at the cartHome node. The dummy then seizes a cart and moves through the airlock and its restriction. Upon exiting the restriction, I check what type of agent it is and direct it to the correct exit block. The dummy agent and cart move to the Product, where the dummy agent releases the cart and sinks. The sink frees the wait block, and the Product seizes the cart that is right next to it and continues on its journey.
It's an easy copy/paste to add more airlocks. Not as nice as my original, but what are you going to do... Thanks for everyone's help and suggestions.
As others have said, there are (not really documented) restrictions on what blocks you can use in preparation and wrap-up flowcharts, which mean what you're attempting won't work.
As you say, it's important to keep a single 'instance' of the airlock flow so that the restrictions (queue and restricted area) are 'global' when this represents the same physical airlock. (Otherwise a repeated custom block is precisely what you should use for each different physical airlock.)
Your best option (and assuming you needed to attach the Cart resource to the Product) is probably to
Add dummy agents (via Source block inject calls) to a separate mini-process that represents your resource preparation requirement (but now not attached to the Seize block).
Replace the Seize in your main process with a Seize-Wait-Release-Seize combination:
The Seize block seizes the cart as normal (without moving or attaching it; no 'Send seized resources' or 'Attach seized resources' options) and then injects an agent into your mini-process (which can use Exit and Enter blocks to use the airlock sub-process). This agent represents the seized resource agent (Cart) and thus should start where it starts and be animated so it looks like it. (You can make the actual Cart temporarily non-visible during this mini-process.)
When the agent reaches the end of the mini-process (at a Sink block), instantly move the related Cart to your node (use jumpTo), make it visible again and free the Product agent from the Wait block
Release the seized Cart and then immediately re-Seize it, but now attaching it (so the animation looks correct). If you use the Resource selection 'Nearest to the agent' option you should be guaranteed to seize the correct cart. (You can also use the 'Customise resource choice' option with some code to ensure that you absolutely always choose the same Cart.)
(It is simpler than the above if you don't care about having a correct animation, and you can use custom blocks to make this block combination reusable and thus not too clunky.)
Edit: A very similar alternative which also works (and is the basis for your own answer) is to have a dummy agent representing your Product in the sub-flow which seizes (and attaches) the actual Cart agent, leaving it at the Product's location to be immediately seized as above. This is slightly better since you don't have to worry about the visibility and 'jumping' of the real resource agent, plus you can move a Seize and a Release from the main flow (which now just has Wait-Seize) to the sub-flow (thus 'hiding them away').

How can I match bags and passengers in the reclaim area?

I'm simulating a security control process, and I can't get each passenger to pick up their own baggage. I have tried Match, Combine and Pickup, but I still can't get them to work correctly.
I've created the following flowchart, and the problem is in the wReclaimPax, pickup and wReclaimBags blocks (you can see them in the picture).
https://ibb.co/v3V57Tm
I saw this link Anylogic - Combined multiple items back to original owner to understand something, but I still need help.
I've created 3 functions:
isMatch:
// Assumed AnyLogic function signature (set in the function's properties): boolean isMatch(Pasajero pasajero, Equipaje equipaje)
if (equipaje.pasajeroLink.equals(pasajero.equipajeLink)) {
    return true;
}
return false;
paxBags:
// Assumed signature: Pasajero paxBags(Equipaje bag); 'wait' presumably refers to the wReclaimPax Wait block holding waiting passengers
for (int i = 0; i < wait.size(); i++) {
    Pasajero p = (Pasajero) wait.get(i);
    if (isMatch(p, bag))
        return p;
}
return null;
bagsPax:
// Assumed signature: Equipaje bagsPax(Pasajero pasajero); 'wait' presumably refers to the wReclaimBags Wait block holding waiting bags
for (int i = 0; i < wait.size(); i++) {
    Equipaje e = (Equipaje) wait.get(i);
    if (isMatch(pasajero, e))
        return e;
}
return null;
Assumed context
You haven't really explained how your code is related to your process but I'm assuming the following:
Because this is luggage retrieval, you want to ensure that a passenger agent (Pasajero) only enters the Pickup block (representing taking the bag from the carousel) when his bag (an Equipaje agent by the look of it) has arrived into the wReclaimBag Wait and been released from it to the queue4 Queue.
For this you need triggers (to remove agents from the Wait blocks) when either a passenger (Pasajero) arrives in the wReclaimPax Wait, or a bag (Equipaje) arrives in the wReclaimBag Wait (because you don't know whether the passenger or their bag will get to their respective Wait block first).
So your paxBags function is called in the on-entry action of the wReclaimBag Wait, and your bagsPax function in the on-entry action of the wReclaimPax Wait.
Possible problems with current approach
Without knowing more of your model it's hard to say but problems I can think of based on what you've supplied are:
Your functions return the Pasajero or Equipaje if there is one that matches. Your match check seemingly relies on bidirectional connections (links) between Pasajero and Equipaje. Obviously, if they're not set up properly the model won't work; and if you're using bidirectional connections, you shouldn't need to check both ends.
Your functions need calling so that, if they return non-null, they then free the matching agent from the other Wait block, and free themselves. Are you doing that? Without checking, there may be issues with calling free for yourself as you enter a Wait block (since it kind of depends on AnyLogic internals whether you count as being 'in' the block at this stage and can be freed). If this seems to be the problem, you could create a timeout 0 dynamic event instance to do the free, so that you're not doing it within the scope of the on-enter action.
Your Pickup block (since it's been set up so that the entering agent will always want to pick up the first agent (Equipaje) in queue4) just needs to be set to wait for a quantity of 1 (though see below).
If you've done all this, the most likely problem is that the underlying event ordering of AnyLogic is affecting things. When you free agents, I'm fairly sure the freeing actually happens in a timeout 0 event scheduled under the covers. So it may be that the passenger arrives at the Pickup before their Equipaje arrives in queue4; though, if you set the Pickup to "Exact quantity (wait for)" with a quantity of 1, it should handle that.
The animation of the process (numbers in/out/within each block and details when clicking on blocks) should also help you debug what is going wrong; e.g., are bags being left in the Wait when they should have been released, etc.
P.S. With this kind of thing you should always create a minimal example model to make testing the issue/solution easier (and for sharing in help forums such as this where the rest of the complexity of your model is irrelevant). Often you find the problem 'naturally' in the process of trying to construct such a model that reproduces your problem in a minimal way.

Can we edit callback function HAL_UART_TxCpltCallback for our convenience?

I am a newbie to both FreeRTOS and STM32. I want to know how exactly the callback function HAL_UART_TxCpltCallback for HAL_UART_Transmit_IT works.
Can we edit that callback function for our convenience?
Thanks in Advance
You call HAL_UART_Transmit_IT to transmit your data in the "interrupt" (non-blocking) mode. This call returns immediately, likely well before your data gets fully transmitted.
The sequence of events is as follows:
HAL_UART_Transmit_IT stores a pointer to, and the length of, the data buffer you provide. It doesn't perform a copy, so the buffer you passed needs to remain valid until the callback gets called. For example, it cannot be a buffer you'll perform delete[] / free() on before the callbacks happen, or a buffer that's local to a function you're going to return from before a callback call.
It then enables the TXE interrupt for this UART, which fires every time the DR (or TDR, depending on the STM32 in use) is empty and can have new data written to it.
At this point the interrupt fires immediately. In the IRQ handler (HAL_UART_IRQHandler) a new byte is put into the DR (TDR) register, which then gets transmitted - this happens in UART_Transmit_IT.
Once this byte has been transmitted, the TXE interrupt triggers again and this process repeats until the end of the buffer you've provided is reached.
If any error happens, HAL_UART_ErrorCallback will get called from the IRQ handler.
If no errors happened and the end of the buffer has been reached, HAL_UART_TxCpltCallback is called (from HAL_UART_IRQHandler -> UART_EndTransmit_IT).
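As a concrete example of the intended usage: you normally don't edit the HAL's callback at all, you provide your own definition of it in application code. The HAL declares HAL_UART_TxCpltCallback as a __weak, empty function precisely so that a user-defined version overrides it. A minimal sketch (C-style; assumes a huart2 handle that was configured elsewhere, e.g. by CubeMX):
/* include your device family HAL header here, e.g. "stm32f4xx_hal.h" (family-specific) */
extern UART_HandleTypeDef huart2;   /* assumed: initialized elsewhere */

static uint8_t msg[] = "hello\r\n";
static volatile uint8_t txDone = 0;

void StartTransmit(void)
{
    txDone = 0;
    /* Returns immediately; msg must stay valid until the callback fires. */
    HAL_UART_Transmit_IT(&huart2, msg, sizeof(msg) - 1);
}

/* User-provided override of the HAL's __weak callback. It runs in interrupt
   context, so keep it short (set a flag, give a FreeRTOS semaphore, etc.). */
void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart2)
    {
        txDone = 1;
    }
}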
On to your second question, whether you can edit this callback "for convenience" - I'd say you can do whatever you want, but you'll have to live with the consequences of modifying what is essentially library code:
Upgrading the HAL to newer versions is going to be a nightmare. You'll have to manually re-apply all the changes you've made to that code and test them again. To some extent this can be automated with some form of version control (git / svn) or even patch files, but if the code you've modified gets changed by ST, those patches will likely not apply anymore and you'll have to do it all by hand again. This may require re-discovering how the implementation changed and doing all your work from scratch.
Nobody is going to be able to help you, as your library code no longer matches the code that everyone else has. If you introduced new bugs by modifying library code, no one will be able to reproduce them. Even if you provided your modifications, I honestly doubt many here would bother to apply your changes and test them in practice.
If I were to express my personal opinion, it'd be this: if you think there are bugs in the HAL code - fix them locally and report them to ST. Once they're fixed in a future update, fully overwrite your HAL modifications with the updated official release. If you think the HAL code lacks functionality or flexibility for your needs, you have two options here:
Suggest your changes to ST. You have to keep in mind that HAL aims to serve "general purpose" needs.
Just don't use HAL for this specific peripheral. This "mixed" approach is exactly what I do personally. In some cases the functionality provided by HAL for a given peripheral is "good enough" to serve my needs (in my case one example is SPI, where I fully rely on HAL), while in some other cases - such as UART - I use HAL only for initialization, while handling transmission myself. Even when you decide not to use the HAL functions, they can still provide some value - you can for example copy their IRQ handler into your code and call your own functions instead. That way you at least skip some parts of development.

Clarification about Scala Future callbacks that never complete and their effect on other callbacks

While re-reading scala-lang.org's page detailing Future here, I stumbled upon the following sentence:
In the event that some of the callbacks never complete (e.g. the callback contains an infinite loop), the other callbacks may not be executed at all. In these cases, a potentially blocking callback must use the blocking construct (see below).
Why may the other callbacks not be executed at all? I may install a number of callbacks for a given Future. The thread that completes the Future may or may not execute the callbacks. But just because one callback misbehaves, the rest should not be penalized, I think.
One possibility I can think of is the way the ExecutionContext is configured. If it is configured with one thread, then this may happen, but that is a specific configuration and not the generally expected behaviour.
Am I missing something obvious here?
Callbacks are called within an ExecutionContext whose number of threads is ultimately limited - if not by the specific context implementation, then by the underlying operating system and/or hardware itself.
Let's say your system's limit is OS_LIMIT threads. You create OS_LIMIT + 1 callbacks. From those, OS_LIMIT callbacks immediately get a thread each - and none ever terminate.
How can you guarantee that the remaining 1 callback ever gets a thread?
Sure, there could be some detection mechanisms built into the Scala library, but it's not possible in the general case to make an optimal implementation: maybe you want the callback to run for a month.
Instead (and this seems to be the approach in the Scala library), you could provide facilities for handling situations that you, the developer, know are risky. This removes the element of surprise from the system.
Perhaps most importantly - it enables the developer to "bake in" the necessary information about handler/task characteristics directly into his/her program, rather than relying on some obscure piece of language functionality (which may change from version to version).

What is the difference between hook and callback?

In some texts I've read, especially the iOS documentation about delegates, all the protocol methods that a custom delegate object needs to implement are called hooks. But some other books call these hooks callbacks. What is the difference between them? Are they just different names for the same mechanism? Besides Obj-C, some other programming languages, such as C, also have hooks - is it the same situation as in Obj-C?
The terminology here is a bit fuzzy. In general the two attempt to achieve similar results.
In general, a callback is a function (or delegate) that you register with the API to be called at the appropriate time in the flow of processing (e.g. to notify you that the processing is at a certain stage).
A hook traditionally means something a bit more general that serves the purpose of modifying calls to the API (e.g. modifying the passed parameters, monitoring the called functions). In this meaning it is usually much lower-level than what can be achieved in higher-level languages like Java.
In the context of iOS, the word hook means the exact same thing as callback above.
Let me chime in with a JavaScript answer. In JavaScript, callbacks, hooks and events are all used. In this order, each is a higher-level concept than the previous one.
Unfortunately, they are often used improperly which leads to confusion.
Callbacks
From a control flow perspective, a callback is a function, usually given as an argument, that you execute before returning from your function.
This is usually used in asynchronous situations when you need to wait for I/O (e.g. an HTTP request, a file read, a database query, etc.). You don't want to wait in a synchronous while loop, so other functions can be executed in the meantime.
When you get your data, you (permanently) relinquish control and call the callback with the result.
function myFunc(someArg, callback) {
  // ...
  callback(error, result);
}
Because the callback function may be some code that hasn't been executed yet, and you don't know what's above your function in the call stack, generally instead of throwing errors you pass on the error to the callback as an argument. There are error-first and result-first callback conventions.
Callbacks have mostly been replaced by Promises in the JavaScript world, and since ES2017 you can natively use async/await to get rid of callback-heavy spaghetti code and make asynchronous control flow look as if it were synchronous.
Sometimes, in special cascading control flows you run callbacks in the middle of the function. E.g. in Koa (web server) middleware or Redux middleware you run next() which returns after all the other middlewares in the stack have been run.
Hooks
"Hook" is not really a well-defined term, but in JavaScript practice you provide hooks when you want a client (API/library user, child classes, etc.) to take optional actions at well-defined points in your control flow.
So a hook may be some function (given as e.g. an argument or a class method) that you call at a certain point e.g. during a database update:
data = beforeUpdate(data);
// ...update
afterUpdate(result);
Usually the point is that:
Hooks can be optional
Hooks are usually waited for, i.e. they are there to modify some data
There is at most one function called per hook (in contrast to events)
React makes use of hooks in its Hooks API, and they - quoting their definition - "are functions that let you “hook into” React state and lifecycle features", i.e. they let you change React state and also run custom functions each time when certain parts of the state change.
Events
In Javascript, events are emitted at certain points in time, and clients can subscribe to them. The functions that are called when an event happens are called listeners - or for added confusion, callbacks. I prefer to shun the term "callback" for this, and use the term "listener" instead.
This is also a generic OOP pattern.
In front-end there's a DOM interface for events, in node.js you have the EventEmitter interface. A sophisticated asynchronous version is implemented in ReactiveX.
Properties of events:
There may be multiple listeners/callbacks subscribed (to be executed) for the same event.
They usually don't receive a callback, only some event information, and they are run synchronously
Generally, and unlike hooks, they are not for modifying data inside the event emitter's control flow. The emitter doesn't care 'if there is anybody listening'. It just calls the listeners with the event data and then continues right away.
Examples: events happen when a data stream starts or ends, a user clicks on a button or modifies an input field.
The two terms are very similar and are sometimes used interchangeably. A hook is an option in a library where the user code can link in a function to change the behavior of the library. The library function need not run concurrently with the user code, as in a destructor.
A callback is a specific type of hook where the user code initiates the library call, usually an I/O call or GUI call, which gives control over to the kernel or the GUI subsystem. The controlling process then 'calls back' the user code on an interrupt or signal so the user code can supply the handler.
Historically, I've seen hook used for interrupt handlers and callback used for GUI event handlers. I also see hook used when the routine is to be statically linked and callback used in dynamically loaded code.
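To make the distinction concrete in C-style code: a callback in this sense is just a function the user supplies, which the library stores at a hook point and later invokes. A tiny sketch (all names invented for illustration):
#include <stdio.h>

typedef void (*completion_cb)(int result);      /* the callback's type */

static completion_cb g_on_done = NULL;          /* the hook point the library exposes */

void lib_set_completion_hook(completion_cb cb)  /* the user "hooks in" their function */
{
    g_on_done = cb;
}

void lib_do_work(void)
{
    int result = 42;                            /* ... actual work ... */
    if (g_on_done != NULL)
    {
        g_on_done(result);                      /* the library "calls back" the user code */
    }
}

static void my_handler(int result)              /* user-supplied handler */
{
    printf("done: %d\n", result);
}

int main(void)
{
    lib_set_completion_hook(my_handler);
    lib_do_work();
    return 0;
}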
Two great answers already, but I wanted to throw in one more piece of evidence that the terms "hook" and "callback" mean the same thing and can be used interchangeably: FreeRTOS favors the term "hook" but recognizes "callback" as an equivalent term when it says:
The idle task can optionally call an application defined hook (or callback) function - the idle hook.
The tick interrupt can optionally call an application defined hook (or callback) function - the tick hook.
The memory allocation schemes implemented by heap_1.c, heap_2.c, heap_3.c, heap_4.c and heap_5.c can optionally include a malloc() failure hook (or callback) function that can be configured to get called if pvPortMalloc() ever returns NULL.
Source: https://www.freertos.org/a00016.html
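For instance, the idle hook from that list is wired up by enabling it in FreeRTOSConfig.h and then simply defining a function with the expected name - the same mechanism you would describe as registering a callback. A sketch (what you do inside the hook is up to you, but it must never block):
/* In FreeRTOSConfig.h: */
#define configUSE_IDLE_HOOK 1

/* In application code - FreeRTOS calls this from the idle task whenever no
   other task is ready to run. */
void vApplicationIdleHook(void)
{
    /* e.g. put the MCU into a low-power sleep until the next interrupt */
}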