How to read completion criteria from the Moodle API to use in an app?

I am using core_course_get_contents and I want to know the activity restriction criteria.
I am getting something like this in the availability field:
{"op":"&","showc":[true],"c":[{"type":"completion","cm":10889,"e":1}]}
{"op":"&","showc":[true],"c":[{"type":"completion","cm":9989,"e":1}]}
{"op":"&","c":[{"type":"grade","id":3410,"min":100}]}
How do I read this? What does it mean?
Are the parameters always different in other cases?
What is the common structure of the availability parameter?

What you get is a JSON string with key/value pairs. This string tells you which availability conditions have to be satisfied.
The first pair is "op": "&".
It means: the boolean operator is an AND. Another value could have been "|" (OR).
The operator tells you how the availability conditions relate to each other: either all of them must be satisfied (AND) or at least one of them (OR).
The second pair is "showc": [true].
It means, I suppose (not sure): show the availability conditions: true. Another value could of course have been false; the array seems to hold one flag per condition in "c".
The third pair is "c" (conditions): an array.
The "c" key gives you, as far as I understand, a detailed description of the availability conditions. Let's go into detail here:
The "type" key tells you what kind of condition you are evaluating: in your first and second examples it is a completion condition ("completion") on a course module ("cm"), with values of 10889 and 9989 respectively. It means: "what follows has to do with course module 10889".
I do not know what the key/value pair "e": 1 means, though. It could mean: "this course module has to be completed". Try it yourself: change the availability conditions of some course module and see what happens.
By the way, you can read this JSON object directly from the availability field of your DB table mdl_course_modules (or your_prefix_course_modules).
In your third example the type is a grade condition ("grade"), the ID of the grade item is 3410, and "min" (I suppose the minimum grade) is 100.
Note that there can be other types, for example "type": "date" or "type": "grouping". I am not aware of a list of all the possible types, though.
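As a small illustration, here is how you could decode such a string and walk through it (a Python sketch; the structure follows the examples above, and the meaning of "e" is only the guess discussed above):

import json

# Raw value of the availability field, as in the first example above:
availability = '{"op":"&","showc":[true],"c":[{"type":"completion","cm":10889,"e":1}]}'
tree = json.loads(availability)

op = tree["op"]  # "&" = AND, "|" = OR
showc = tree.get("showc") or [None] * len(tree["c"])
for show, cond in zip(showc, tree["c"]):
    if cond["type"] == "completion":
        # "e" appears to be the required completion state (see above)
        print(f"course module {cond['cm']} needs completion state {cond['e']}, shown: {show}")
    elif cond["type"] == "grade":
        print(f"grade item {cond['id']} must be at least {cond.get('min')}")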

Right way to use data type 'F' in SELECT-OPTIONS?

I want to have a SELECT-OPTIONS field in ABAP with the data type FLTP, which is basically a float. But this is not possible using SELECT-OPTIONS.
I tried to use PARAMETERS instead, which solved this issue. But now, of course, I get no results when using this parameter value in the WHERE clause of my SELECT.
So on the one side I can't use data type 'F', but on the other side I get no results. Is there any way out of this dilemma?
Checking floating-point values for exact equality is a bad idea. It works in some edge cases (like 0), but often it does not. The reason is that not every value the user can express in decimal notation can also be expressed as a floating-point value. The values get rounded internally, and you end up with inequality where you would expect equality. Check the website "What Every Programmer Should Know About Floating-Point Arithmetic" for more information on this phenomenon.
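A quick illustration in Python (the same effect applies to ABAP's type f, since both are binary floats):

# 0.1 and 0.2 have no exact binary floating-point representation,
# so the stored values are rounded and the exact comparison fails:
print(0.1 + 0.2 == 0.3)      # False
print(f"{0.1 + 0.2:.17f}")   # 0.30000000000000004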
So offering a SELECT-OPTION or a single PARAMETER to SELECT floating-point values from a table might be a bad idea.
What I would recommend instead is to have the user state a range between two values, with both fields obligatory:
PARAMETERS:
  p_from TYPE f OBLIGATORY,
  p_to   TYPE f OBLIGATORY.

" Select everything inside the user-supplied range instead of
" testing a float for exact equality:
SELECT somdata
  FROM table
  WHERE floatfield >= @p_from AND floatfield <= @p_to
  INTO TABLE @DATA(lt_results).
But another thing you might want to consider is whether float is really the appropriate data type for your situation. If the table is a Z-table, you might want to change the type of that field to a packed number or one of the decfloat flavours, as those will cause you far fewer surprises.
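As a rough analogy in Python: the decimal module behaves like the packed/decfloat types in that decimal literals are represented exactly, so comparisons work the way users expect:

from decimal import Decimal

# Decimal stores 0.1 exactly, so the equality holds:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True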

How can I use key/value dashboard variables in Grafana + InfluxDB?

I’m trying to suss out how to format my key/value pair dashboard variable. I’ve got two variables whose definitions are:
sensor_list = 4431,8298,11041,13781
sensor_kv = 4431 : Storage,8298 : Stairs,11041 : Closet,13781 : Attic
However, I can't seem to use it effectively for queries and dashboard formatting with InfluxDB. For example, I've got a panel whose query is this:
SELECT last("battery_ok") FROM "autogen"."Acurite-Tower" WHERE ("id" =~ /^$sensor_list$/) AND $timeFilter GROUP BY time($__interval) fill(null)
That works, but if I replace it with the KV, I can't get the value:
SELECT last("battery_ok") FROM "autogen"."Acurite-Tower" WHERE ("id" =~ /^$sensor_kv$/) AND $timeFilter GROUP BY time($__interval) fill(null)
^ that comes back with no data.
I'm also at a loss as to how to access the value of the KV pair in, say, the template values for a repeating panel. ${sensor_kv:text} returns the word "All" but ${sensor_kv:value} actually causes a straight up error: "Error: Variable format value not found"
My goal here is twofold:
To use the key side of the kv map as the ID to query from in the DB
To use the value side as the label of the stat panel and also as the alias of the measurement if I'm querying in a graph
I’ve read the formatting docs and all they mention are lists; there are no key/value examples there, and certainly none that do this. It’s clearly a new-ish feature (here is the GH issue where its implementation was merged), so I’m hoping there’s just a doc miss somewhere.
In the PR that you linked there is a tiny comment saying that a key/value pair has to contain spaces around the colon.
So when you're defining the pairs in "Values separated by comma", it should be like
key1 : value1, key2 : value2
These will not work (see the sketch below for why):
key1:value1, key2:value2
key1 :value1, key2 :value2
key1: value1, key2: value2
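To see why the spacing matters, here is a rough Python sketch of the behaviour described above (an illustration only, not Grafana's actual code):

def parse_pair(pair: str):
    # Only a colon surrounded by spaces separates key and value;
    # otherwise the whole entry is treated as a single plain value.
    if " : " in pair:
        key, value = pair.split(" : ", 1)
        return key, value
    return pair, pair

print(parse_pair("key1 : value1"))  # ('key1', 'value1')
print(parse_pair("key1:value1"))    # ('key1:value1', 'key1:value1')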
Let's say that the name of the custom variable is var1.
Then you can access the key by ${var1}, $var1, ${var1:text} or [[var1:text]]
(some data sources will be satisfied with $var1; some will understand only ${var1:text}).
And you can access the value by ${var1:value} or [[var1:value]].
Tested in Grafana 8.4.7
I realise this might not be all the information you're after, but I hope it will be useful. I came across this question when trying to implement something similar myself (also using InfluxDB), and I have managed to access both keys and values in a query.
My query looks like this:
SELECT
"Foo.${VariableName:text}.Bar.${VariableName:value}"
FROM "db"
WHERE (filters, filters) AND $timeFilter GROUP BY "bas"
So as you see, my use case was a bit different from what you're trying to achieve, but it demonstrates that it's basically possible to access both the key and the value in a query.
Key/value variables work with data sources whose queries can return such a result, e.g. MySQL https://grafana.com/docs/grafana/latest/datasources/mysql/:
Another option is a query that can create a key/value variable. The query should return two columns that are named __text and __value. The __text column value should be unique (if it is not unique then the first value is used). The options in the dropdown will have a text and value that allows you to have a friendly name as text and an id as the value.
But that's not the case for InfluxDB: https://grafana.com/docs/grafana/latest/datasources/influxdb/ InfluxDB can't return a key=>value result - it returns only time series (which are not key=>value), or only values, or only keys.
Workarounds:
1.) Use a supported DB (MySQL, PostgreSQL) just to get correct key=>value results. You really don't need to create a table for that; a combination of SELECT, UNION, ... will give you the desired result (see the sketch after this list).
2.) Use a hidden variable which "translates" the value to a key, which is then used in the query. E.g. https://community.grafana.com/t/how-to-alias-a-template-variable-value/10929/3
Of course everything has pros and cons; for example, multi-value variable values may not work as expected.
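To make workaround 1 concrete, here is a sketch of the result shape such a variable query has to produce, per the MySQL docs quoted above (shown as Python data purely for illustration; the sensor names and IDs are taken from the question):

# The variable query must return two columns named __text and __value:
# __text is the friendly label in the dropdown, __value is what gets
# interpolated into queries.
rows = [
    {"__text": "Storage", "__value": "4431"},
    {"__text": "Stairs",  "__value": "8298"},
    {"__text": "Closet",  "__value": "11041"},
    {"__text": "Attic",   "__value": "13781"},
]
# e.g. a MySQL variable query producing this shape:
#   SELECT 'Storage' AS __text, '4431' AS __value
#   UNION SELECT 'Stairs', '8298'
#   UNION SELECT 'Closet', '11041'
#   UNION SELECT 'Attic', '13781';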

Why does pyang validation allow defining a list without a valid key if the list is in a grouping?

RFC 6020 says:
The "key" statement [...] takes as an argument a
string that specifies a space-separated list of leaf identifiers of
this list. [...] Each such leaf identifier MUST refer to a child leaf of the
list. The leafs can be defined directly in substatements to the
list, or in groupings used in the list.
Despite this fact it is possible to successfully validate the below grouping in pyang:
grouping my-grouping {
  list my-list-in-a-grouping {
    key there-is-no-such-leaf;
  }
}
If the list is outside of a grouping, or if I use the grouping without any augmentations, then I get an error (which is expected):
error: the key "there-is-no-such-leaf" does not reference an existing leaf
What is the point of having groupings that require augmentations in order to be used?
According to Martin Bjorklund, an author of the related RFCs, this is not valid YANG; pyang fails to detect it due to a bug in its implementation. The RFC text you quoted in your question does not permit any other interpretation, and the restriction appears to be intentional. Groupings were never meant to be used in such a way.
Could it be because a grouping is not a data definition node and pyang validates only such nodes?
The grouping statement is not a data definition statement and, as such, does not define any nodes in the schema tree.
RFC 6020

Is there any way for Access 2016 to sort the numbers that are part of a "text" data type formatted field as though they are numeric values?

I am working on a database that (hopefully) will end up using a primary key with both numbers and letters in its values to track lots of agricultural product. Due to the way the weighing of product takes place at more than one facility, I have no other option but to keep the same base number and add letters to it to denote split portions of each lot of product. The problem is that after I create record number 99, the number 100 suddenly floats up and sorts underneath 10. This makes it difficult to maintain consistency and forces me to replace the alphanumeric lot ID with a strictly numeric value (using AutoNumber as the data type) just to keep things sorted. Either way, I need the alphanumeric lot ID, and having two IDs for the same lot can be confusing for anyone inputting values into the form. Is there a way around this that I am just not seeing?
If you're using a query as the data source then you may try to sort by the string converted to a number, something like
SELECT id, field1, field2, ...
FROM YourTable
ORDER BY CLng(YourAlphaNumericField)
Edit: you may also try the Val function instead of CLng - it should not fail on non-numeric input such as "100A".
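The effect is easy to reproduce; here is a small Python sketch of the text-sort problem and of the Val-style fix (sorting by the leading numeric part, then the letter suffix):

import re

ids = ["9", "10", "99", "100", "100A", "100B"]

print(sorted(ids))
# ['10', '100', '100A', '100B', '9', '99']  <- text sort: 100 lands under 10

def lot_key(s):
    # Split the lot ID into its leading number and the letter suffix:
    m = re.match(r"(\d+)(.*)", s)
    return (int(m.group(1)), m.group(2))

print(sorted(ids, key=lot_key))
# ['9', '10', '99', '100', '100A', '100B']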
Why not format your key properly before saving? E.g. "0000099". You will avoid a costly conversion later.
Alternatively, you could use two fields as a composite PK: one with the Number (as Long) and one with the Location (as String).

How to query Cassandra by date range

I have a Cassandra ColumnFamily (0.6.4) that will have new entries from users. I'd like to query Cassandra for those new entries so that I can process that data in another system.
My sense was that I could use a TimeUUIDType as the key for my entry, and then query on a KeyRange that starts either with "" as the startKey, or whatever the lastStartKey was. Is this the correct method?
How does get_range_slice actually create a range? Doesn't it have to know the data type of the key? There's no declaration of the data type of the key anywhere. In the storage_conf.xml file, you declare the type of the columns, but not of the keys. Is the key assumed to be of the same type as the columns? Or does it do some magic sniffing to guess?
I've also seen reference implementations where people store TimeUUIDType in columns. However, this seems to have scale issues as this particular key would then become "hot" since every change would have to update it.
Any pointers in this case would be appreciated.
When sorting data, only the column keys are important; the stored data is of no consequence, and neither is the auto-generated timestamp. The CompareWith attribute is what matters here: if you set CompareWith to UTF8Type, the column keys will be interpreted as UTF-8 strings; if you set it to TimeUUIDType, they are automatically interpreted as timestamps. You do not have to specify the data type.
Look at the SlicePredicate and SliceRange definitions on this page: http://wiki.apache.org/cassandra/API - it is a good place to start. Also, you might find this article useful: http://www.sodeso.nl/?p=80 In the third part or so, he talks about slice-ranging his queries.
Doug,
Writing to a single column family can sometimes create a hot spot if you are using an Order-Preserving Partitioner, but not if you are using the default Random Partitioner (unless a subset of users create vastly more data than all other users!).
If you sorted your rows by time (using an Order-Preserving Partitioner) then you are probably even more likely to create hotspots, since you will be adding rows sequentially and a single node will be responsible for each range of the keyspace.
Columns and Keys can be of any type, since the row key is just the first column.
In effect, the cluster is a circular hash ring, and keys get hashed by the partitioner to be distributed around the cluster.
Beware of using dates as row keys, however, since even the randomization of the default RandomPartitioner is limited and you could end up clustering your data.
What's more, if that date changes, you would have to delete the previous row, since you can only do inserts in C*.
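To illustrate why key order is lost under the default partitioner: sequential keys hash to tokens in essentially arbitrary order, so a range of keys no longer maps to a contiguous stretch of the ring (a quick Python sketch):

import hashlib

for k in ["2010-06-01", "2010-06-02", "2010-06-03"]:
    # The MD5 token decides placement on the ring, and it bears
    # no relation to the lexical order of the keys:
    print(k, hashlib.md5(k.encode()).hexdigest()[:8])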
Here is what we know:
A slice range is a range of columns in a row, with a start value and an end value; it is used mostly for wide rows, as columns are ordered. Known column names defined in the CF are indexed, however, so they can be retrieved by specifying names.
A key slice is a key associated with the sliced column range, as returned by Cassandra.
The equivalent of a WHERE clause uses secondary indexes; you may use inequality operators there, but there must be at least ONE equality clause in your statement (see also https://issues.apache.org/jira/browse/CASSANDRA-1599).
Using a key range is ineffective with a RandomPartitioner, as the MD5 hash of your key doesn't preserve lexical ordering.
What you want to use is a Column-Family-based index using a wide row:
CompositeType(TimeUUID | UserID)
In order for this not to become hot, add a first meaningful key (a "shard key") that splits the data across nodes, such as the user type or the region.
Having more data than necessary in Cassandra is not a problem; it is how Cassandra is designed. So what you must ask yourself is "what do I need to query?" and then design a Column Family for it, rather than trying to fit everything into one CF like you would in an RDBMS.
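As a rough Python sketch of that design (the sharding scheme and all names here are made up for illustration; in 0.6 the actual writes would go through the Thrift API):

import uuid
from datetime import datetime, timezone

def index_entry(region: str, user_id: str, event_time: datetime):
    # Shard key first (here: region + day bucket), so the index is
    # spread across nodes instead of living on one hot row:
    row_key = f"{region}:{event_time.strftime('%Y%m%d')}"
    # A TimeUUID column name keeps columns in chronological order;
    # uuid1() stamps the current clock, so for historic events you
    # would build the UUID from event_time instead:
    column_name = (uuid.uuid1(), user_id)
    return row_key, column_name

print(index_entry("eu", "user42", datetime.now(timezone.utc)))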