deleting a leaf with default value (yang) - ietf-netmod-yang

Say I have this:
container c {
  leaf l1 { type uint32; }                    // type assumed for illustration
  leaf l2 { type string; default "abcd"; }    // type assumed for illustration
}
and I do this (RESTCONF):
DELETE /c/l2
what is the expected behavior on the server?
Is it
'delete the leaf data' or
'do not delete but preserve the leaf with the
default value'?
After issuing the delete, what is the expected result for a GET?
GET /c
c {
l1 : 100 // for ex
l2 : 'abcd'
}

This is described in RFC 7950, Section 7.6.1:
The default value of a leaf is the value that the server uses if the
leaf does not exist in the data tree. The usage of the default value
depends on the leaf's closest ancestor node in the schema tree that
is not a non-presence container (see Section 7.5.1):
o If no such ancestor exists in the schema tree, the default value
MUST be used.
o Otherwise, if this ancestor is a case node, the default value MUST
be used if any node from the case exists in the data tree or the
case node is the choice's default case, and if no nodes from any
other case exist in the data tree.
o Otherwise, the default value MUST be used if the ancestor node
exists in the data tree.
In these cases, the default value is said to be in use.
Note that if the leaf or any of its ancestors has a "when" condition
or "if-feature" expression that evaluates to "false", then the
default value is not in use.
When the default value is in use, the server MUST operationally
behave as if the leaf was present in the data tree with the default
value as its value.
If a leaf has a "default" statement, the leaf's default value is the
value of the "default" statement. Otherwise, if the leaf's type has
a default value and the leaf is not mandatory, then the leaf's
default value is the type's default value. In all other cases, the
leaf does not have a default value.
In your case c is a non-presence container, therefore the first bullet above kicks in. This means your default will be in use if you delete the corresponding leaf from the data tree (yes, you can delete it). The server MUST therefore operationally behave as if the leaf was present, and this leaf must have the specified default value.
It does not matter which protocol is used to do the operations.
For RESTCONF and GET, the behavior is described in RFC 8040, Section 3.5.4:
RESTCONF requires that a server report its default handling mode (see
Section 9.1.2 for details). If the optional "with-defaults" query
parameter is supported by the server, a client may use it to control
retrieval of default values (see Section 4.8.9 for details).
If a leaf or leaf-list is missing from the configuration and there is
a YANG-defined default for that data resource, then the server MUST
use the YANG-defined default as the configured value.
If the target of a GET method is a data node that represents a leaf
or leaf-list that has a default value, and the leaf or leaf-list has
not been instantiated yet, the server MUST return the default
value(s) that are in use by the server. In this case, the server
MUST ignore its basic-mode, described in Section 4.8.9, and return
the default value.
If the target of a GET method is a data node that represents a
container or list that has any child resources with default values,
for the child resources that have not been given value yet, the
server MAY return the default values that are in use by the server,
in accordance with its reported default handling mode and query
parameters passed by the client.
So your GET example may or may not be correct, depending on which defaults handling mode is in effect, as the last paragraph above suggests.
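For concreteness, here is a minimal sketch of the exchange using Python's requests library. The server URL, the example module prefix, and support for the with-defaults query parameter are all assumptions for illustration:

import requests

BASE = "https://example.com/restconf/data"   # hypothetical server
HDRS = {"Accept": "application/yang-data+json"}

# Delete the leaf; its stored value is removed from the data tree.
r = requests.delete(f"{BASE}/example:c/l2", headers=HDRS)
print(r.status_code)   # expect 204 No Content on success

# Plain GET: whether l2 appears depends on the server's basic-mode.
r = requests.get(f"{BASE}/example:c", headers=HDRS)
print(r.json())

# With with-defaults=report-all, the in-use default must be reported,
# e.g. {"example:c": {"l1": 100, "l2": "abcd"}}
r = requests.get(f"{BASE}/example:c", headers=HDRS,
                 params={"with-defaults": "report-all"})
print(r.json())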

Related

Gtk.TreeModelFilter, no automatic way to show parent when using VisibleFunc?

It seems that there are two mutually-exclusive ways to use a filter:
1. Setting a function to determine visibility.
2. Setting a boolean column to denote visibility.
The problem with (1) is that if any of the ancestors does not match the function, matching children will not be shown. For example, if the VisibleFunc returns true when the current node's name contains "b" for the following tree, only "bc" will be shown, and "ab" will not, because its parent "a" does not contain "b".
a
  ab
  ac
b
  bc
I think that in most use cases this is not something the user wants. I saw an existing question whose answer was to use (2) instead. Basically, what that answer did was manually traverse the tree and, whenever the current node matched the criteria, iterate up to the root node and change the visibility of all its ancestors. It would work, but it seems kind of awkward, because it needs to modify the TreeStore itself and do manual traversals.
Since I think it would be a common use case to show a matching child node, even if not all of its ancestors match, doesn't (1) have any option for this? Like, keep applying the VisibleFunc to all the descendants of a node that does not match the function, anyway, and automatically make all ancestors visible? Or, is using (2) the only way?
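For what it's worth, option (1) can be kept without touching the TreeStore by making the VisibleFunc itself look down the subtree: a row is shown if it matches or if any of its descendants match. A minimal PyGObject sketch, assuming a single string column and the "contains b" criterion from the question:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

store = Gtk.TreeStore(str)
a = store.append(None, ["a"])
store.append(a, ["ab"])
store.append(a, ["ac"])
b = store.append(None, ["b"])
store.append(b, ["bc"])

def matches(model, it):
    return "b" in model[it][0]          # the criterion from the question

def any_descendant_matches(model, it):
    child = model.iter_children(it)
    while child is not None:
        if matches(model, child) or any_descendant_matches(model, child):
            return True
        child = model.iter_next(child)
    return False

def visible_func(model, it, data):
    # Keep a row if it matches, or if any descendant does, so that
    # matching children are not hidden by non-matching ancestors.
    return matches(model, it) or any_descendant_matches(model, it)

filt = store.filter_new()
filt.set_visible_func(visible_func)

The trade-off is that each visibility check is O(size of the subtree), so for large trees the boolean-column approach (2), which caches results, may still be preferable.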

Unique symbol value on type level

Is it possible to have some kind of unique symbol value on the type level that could be used to distinguish (tag) some record, without the need to supply a unique string value?
In JS there is Symbol, often used for such things. But I would like to have it without using Effect, in a pure context.
Well, it could even be something like accessing the fully qualified module name (which is quite unique for the task), but I'm not sure whether that is really relevant/possible in the PureScript context.
Example:
Say there is some module that exposes:
type Worker value state =
  { tag :: String
  , work :: value -> state -> Effect state
  }
makeWorker :: forall value state. Worker value state
performWork :: forall value state. Worker value state -> value -> Effect Unit
This module is used to manage the state of workers: it passes them a value and the current state value, gets back an Effect with the new state value, and puts it in a state map whose keys are the tags.
Users of the module:
In one module:
worker = makeWorker { tag: "WorkerOne", work }
-- Then this tagged `worker` is used to performWork:
-- performWork worker "Some value"
In another module we use worker with another tag:
worker = makeWorker { tag: "WorkerTwo", work }
So it would be nice if there were no need to supply a unique string ("WorkerOne", "WorkerTwo") as a tag, and some "generated" unique value could be used instead. But the catch is that the worker should be created at the top level of the module, in a pure context.
The semantics of PureScript are pure and pretty much incompatible with this sort of thing: the same expression always produces the same result. The results can be represented differently at a lower level, but in the language semantics they're the same.
And this is a feature, not a bug. In my experience, more often than not, a requirement like yours is an indication of a flawed design somewhere upstream.
An exception to this rule is FFI: if you have to interact with the underlying platform, there is no choice but to play by that platform's rules. One example I can give is React, which uses JavaScript's implicit object identity as a way to tell components apart.
So the bottom line is: I urge you to reconsider the requirement. Chances are, you don't really need it. And even if you do, manually specified strings might actually be better than automatically generated ones, because they may help you troubleshoot later.
But if you really insist on doing it this way, good news: you can cheat! :-)
You can generate your IDs effectfully and then wrap them in unsafePerformEffect to make it look pure to the compiler. For example:
import Effect.Unsafe (unsafePerformEffect)
import Data.UUID (toString, genUUID)

-- Generated once, at module initialization; looks pure to the compiler.
workerTag :: String
workerTag = toString $ unsafePerformEffect genUUID

Unreal GAS: Influence of the GameplayEffect aggregator on gameplay attribute values

While analyzing the lifetime of a GameplayEffect modifier and execution, I stumbled across attribute aggregators, or GameplayEffect aggregators (both terms are used in the source code). These are used to evaluate modified attribute values (they are modified by GameplayEffects).
However, I don't understand how those aggregators influence the actual GameplayAbilitySystem attributes which are attached (as part of an AttributeSet) to the actor:
Does an attribute/GameplayEffect aggregator FAggregator influence the base value or the current value of a gameplay attribute FGameplayAttributeData?
Is the base value of an attribute/GameplayEffect aggregator float FAggregator::BaseValue related to the base value of a gameplay attribute float FGameplayAttributeData::BaseValue?
The vital components of attribute/GameplayEffect aggregators are
so-called gameplay modifier evaluation channels EGameplayModEvaluationChannel, which are used sequentially during value evaluation (the result of channel 0 is passed as the base value to channel 1, etc.)
storage of modifiers (with their magnitude, operation, tags, and a link to the applying GameplayEffect) in certain channels, which defines the actual numerical evaluation
Those are used to evaluate
a final value
a base value, by evaluating a final value in reverse, attempting to determine the base value from the modifiers (deprecated because GAS now has struct-based attributes, according to the documentation)
a bonus value (final value - base value)
(all of them are just return values of functions and are not member variables of the aggregator)
To notify other classes of an evaluation (or of changes to the aggregator), two mechanisms are used:
a delegate FOnAggregatorDirty is broadcast, which contains a reference to the aggregator
every GameplayEffect registered in the AbilitySystemComponent propagates the change to its affected attribute(s) (via FActiveGameplayEffectsContainer::UpdateAggregatorModMagnitudes()) by updating the attribute's aggregator FAggregator (obtained via FindOrCreateAttributeAggregator()) in FAggregator::UpdateAggregatorMod()
I don't see how either of those notification mechanisms updates the actual attribute values.
(The official documentation/source code as well as the excellent GAS: Comprehensive Analysis and GAS and you unfortunately don't shed light on GameplayEffect aggregators.)
(1) The attribute/GameplayEffect aggregator influences the current value of a gameplay attribute.
(2) Partly, yes. They are related in one direction: the base value of the gameplay attribute is used to set the base value of the attribute/GameplayEffect aggregator, but not the other way around. The aggregator does not change the attribute's base value.
Explanation for (1)
I was on the right track by looking at the notification mechanisms. In fact, both in conjunction update the gameplay attribute (a toy sketch follows the call chain below):
FActiveGameplayEffectsContainer::FindOrCreateAttributeAggregator() binds UAbilitySystemComponent::OnAttributeAggregatorDirty() to the OnDirty delegate (that delegate is executed when the aggregator changes, as noted in the question).
OnAttributeAggregatorDirty() calls FActiveGameplayEffectsContainer::InternalUpdateNumericalAttribute(), which calls
UAbilitySystemComponent::SetNumericAttribute_Internal(), which calls
FGameplayAttribute::SetNumericValueChecked(). That sets the current value of a gameplay attribute.
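To make that chain tangible, here is a toy model in plain Python (not Unreal code; all names are simplified stand-ins): the aggregator only recomputes a value, and a bound "on dirty" callback writes the result into the attribute's current value, leaving its base value untouched.

class GameplayAttributeData:
    def __init__(self, base):
        self.base_value = base       # stands in for FGameplayAttributeData::BaseValue
        self.current_value = base    # stands in for the attribute's CurrentValue

class Aggregator:
    def __init__(self, base):
        self.base_value = base       # seeded from the attribute's base value
        self.mods = []               # (op, magnitude) pairs
        self.on_dirty = []           # callbacks, like FOnAggregatorDirty

    def evaluate(self):
        value = self.base_value
        for op, magnitude in self.mods:
            value = value + magnitude if op == "add" else value * magnitude
        return value

    def add_mod(self, op, magnitude):
        self.mods.append((op, magnitude))
        for callback in self.on_dirty:   # "broadcast" the dirty delegate
            callback(self)

attr = GameplayAttributeData(base=100.0)
agg = Aggregator(base=attr.base_value)
# The equivalent of binding OnAttributeAggregatorDirty:
agg.on_dirty.append(lambda a: setattr(attr, "current_value", a.evaluate()))

agg.add_mod("add", 25.0)
print(attr.base_value, attr.current_value)   # 100.0 125.0: only the current value changes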
Explanation for (2)
The base value of the attribute/GameplayEffect aggregator is set only using the gameplay attribute base value FGameplayAttributeData::BaseValue in:
FActiveGameplayEffectsContainer::OnAttributeAggregatorDirty()
FActiveGameplayEffectsContainer::SetAttributeBaseValue()
The base value of the gameplay attribute is set by:
UAttributeSet::InitFromMetaDataTable()
FActiveGameplayEffectsContainer::SetAttributeBaseValue()
In both functions, the attribute's base value is set without using the aggregator's base value.

leafref require-instance allows carrying non-existing values

I want to get clarity on the following.
Say (omitting the key for brevity):
list l1 {
  leaf lx {
    type leafref {
      path "/x/y";
      require-instance false;
    }
    mandatory false;
  }
}
Because mandatory is false, I can have an l1 instance without the leaf lx.
Because require-instance is false, I can have a list instance with lx of any value (of a valid type), whether a corresponding /x/y exists or not.
And suppose the leaf is mandatory:
list l1 {
  leaf lx {
    type leafref {
      path "/x/y";
      require-instance false;
    }
    mandatory true;
  }
}
Now the list instance must carry the leaf lx, and any value (of the correct type) is valid because require-instance is false.
Correct?
When should I use this facility?
If require-instance is set to false, then the value space of the leafref node is the same as the value space of the referred node. This may be useful if the value space of the referred node is particularly complex, has lots of restrictions, etc. So, basically, module creators can reuse what they have defined earlier.
If require-instance is set to true (or omitted), then when there are no instances of the referred leaf, the value space of the leafref node is effectively empty. Thus, you cannot create a valid instance of the leafref node, because there are no possible values for it.
Below is the relevant part of the YANG 1.1 RFC 7950:
9.9. The leafref Built-In Type
The leafref built-in type is restricted to the value space of some leaf or leaf-list node in the schema tree and optionally further restricted by corresponding instance nodes in the data tree. The "path" substatement (Section 9.9.2) is used to identify the referred leaf or leaf-list node in the schema tree. The value space of the referring node is the value space of the referred node.
and a fragment about the require-instance statement:
9.9.3. The "require-instance" Statement
[...] If this statement is not present, it defaults to "true".
If "require-instance" is "true", it means that the instance being referred to MUST exist for the data to be valid. This constraint is enforced according to the rules in Section 8.
If "require-instance" is "false", it means that the instance being referred to MAY exist in valid data.
IMHO this part of the YANG 1.1 RFC 7950 is a bit misleading. First it says that the value space of the referring node is the value space of the referred node, but later on it says that there is an instance-existence restriction by default. This means that, by default, the value space of the referring node is the set of leaf instance values of the referred node, not the complete value space of that node.
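A toy illustration of that distinction in Python (not a YANG tool; the string type for /x/y and the sample values are assumptions):

def leafref_valid(value, referred_instances, require_instance=True):
    # Type check: the leafref inherits the referred node's value space.
    if not isinstance(value, str):       # assume /x/y is a string leaf
        return False
    # With require-instance true, the value must also match an existing
    # instance of /x/y in the data tree.
    if require_instance:
        return value in referred_instances
    return True

existing = {"eth0", "eth1"}              # current /x/y instances
print(leafref_valid("eth2", existing, require_instance=False))  # True
print(leafref_valid("eth2", existing, require_instance=True))   # False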

weird object returned by vector_indexing_suite

I have a
std::vector<const T*>
that I return from a C++ function:
getallTs()
I have exposed the T class with:
class_<T,T*>
and the vector like so:
class_<std::vector<const T*>>("TsList")
    .def(vector_indexing_suite<std::vector<const T*>, true>())
    ;
What does the NoProxy argument mean?
I expose the function like so:
def("getallTs", getallTs,
return_value_policy<return_by_value>{});
I observe a weird behaviour.
When I call from python
tlist = getallTs()
I get a TsList object.
len(tlist)
works.
tlist[<anycorrectindex>].<someattribute>
also works.
However, if I just
print(tlist[0])
and
print(tlist[100])
Python prints
object T at <address>
This address is the same for all the Ts in tlist.
Also, I cannot iterate over tlist with a Python for loop.
for t in tlist:
doesn't work.
Any ideas what is wrong with the way I am exposing the vector and the function to Python?
I understand that the Python objects that each wrap a C++ T hold a raw pointer to T.
These T instances exist throughout the process in a global table.
The C++ function returns a vector of pointers to those instances.
What does the indexing_suite do with those?
Thanks,
When accessing elements by index, the indexing suite defaults to providing a proxy to the element, as a means to provide reference semantics for mutable types that Python users will often expect with collections:
val = c[i]
c[i].m() # Mutates state, equivalent to `val.m()`
assert(val == c[i]) # Have same state.
val.m()
assert(val == c[i]) # Have same state.
In the above example, val is a proxy object that is aware of the container element. When NoProxy is true, one gets value semantics when indexing, resulting in a copy on each index access.
val = c[i] # val is a copy.
c[i].m() # Modify a copy of c[i].
assert(val == c[i]) # These have the same state because c[i] returns a new copy.
val.m()
assert(val != c[i]) # These do not have the same state.
When proxies are not used, mutations to the elements will only persist when invoked on a reference to the element, such as during iteration:
for val in c:
val.m() # modification observed in c[#]
When invoking print(c[i]), a temporary proxy object is created and passed to print, and the lifetime of the proxy object ends upon returning from print(). Hence, the memory and identification used by the temporary proxy object may be re-used. This can result in elements appearing to have the same identification:
id0 = id(c[0]) # id of the temporary proxy
id1 = id(c[1]) # id of another temporary proxy
print(id0 == id1) # Non-deterministic: these may or may not be the same.
assert(c[0] is not c[1]) # Guaranteed to not be the same object.
On the other hand, during the lifetime of a proxy, other proxies to the same element will have identical identification, and proxies to different elements will have different identification:
c0 = c[0] # proxy to element 0.
c0_2 = c[0] # another proxy to element 0.
c1 = c[1] # proxy to element 1
assert(c0 is c0_2)
assert(c0 is c[0])
assert(c0 is not c1)
In the situation where T has been exposed as being held by T*, iteration over std::vector<const T*> will fail in Python if there is no to-Python conversion for const T*. Exposing class T as being held by T* registers automatic to-Python and from-Python conversions for T*, not const T*. When iterating over the collection in Python, references to elements are returned, and constructing a Python object from a const T* fails. On the other hand, when accessing elements via index, the resulting Python object is either a proxy or a copy, which can use the existing converters (the hypothetical session after the following list shows the symptom). To resolve this, consider either:
having std::vector<>'s element type be the same as T's held type
explicitly registering a const T* to-Python converter
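To make the symptom concrete, here is a hypothetical Python session; the module name example is an assumption for the bindings described in the question:

import example # hypothetical module built from the bindings above

tlist = example.getallTs()
print(len(tlist)) # works: __len__ needs no per-element conversion
t0 = tlist[0] # works: indexing returns a proxy (or a copy with NoProxy)

try:
    for t in tlist: # iteration hands back const T* references...
        pass
except TypeError as err:
    # ...which fail to convert without a registered const T* to-Python converter
    print("iteration failed:", err)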