Using MarkLogic 8 triggers for managed triples

I have a use case where I want to send a notification every time triples are added or deleted from MarkLogic. The notification should contain those triples and should say whether they were added or deleted.
I didn't find any mention in the MarkLogic triggers guide of how triggers work with (managed) triples. Is there a way to write a trigger module that, for a modified document containing the managed triples, compares the new version with the old version to work out what was added and deleted, and then sends an HTTP request containing those changes?
I understand that doc($trgr:uri) will give me the latest state of the document in question - but is there a way to retrieve the previous version, from before the change? I'm fairly new to MarkLogic and XQuery, so some guidance is much appreciated. Thanks!

I think you can only achieve this in one way:
use pre-commit triggers
use xdmp:eval with isolation set to different-transaction to get the original document
Something like:
xquery version "1.0-ml";
import module namespace trgr = "http://marklogic.com/xdmp/triggers" at "/MarkLogic/triggers.xqy";
declare variable $trgr:uri as xs:string external;
xdmp:log("Triggered processing of " || $trgr:uri || ".."),
xdmp:log(xdmp:eval('doc("'||$trgr:uri||'")', (), <options xmlns="xdmp:eval"><isolation>different-transaction</isolation></options>)),
xdmp:log(doc($trgr:uri))
I ran a quick test with a trigger scoped to collection 'test'. I then inserted a doc at /test.xml with contents <test>a</test>, and did NOT add it to collection test yet. I then updated the document with <test>b</test>, and also added it to collection test to activate the trigger. It logged both a and b.
This shows how you can get both the original and the updated document. Determining the difference is a challenge of its own.
HTH!

Many thanks to @grtjn for providing the way to access the pre-change document. For determining the difference between documents I found an approach inspired by this blog post. The solution I found to work looks like this:
xquery version '1.0-ml';
import module namespace trgr='http://marklogic.com/xdmp/triggers' at '/MarkLogic/triggers.xqy';
(: returns the items of $seq1 whose string value does not occur in $seq2 :)
declare function local:diff($seq1 as item()*, $seq2 as item()*) as item()* {
  let $map1 := map:new($seq1 ! map:entry(fn:string(.), .))
  let $map2 := map:new($seq2 ! map:entry(fn:string(.), .))
  return map:keys($map1 - $map2) ! map:get($map1, .)
};
declare variable $trgr:uri as xs:string external;
declare variable $after := doc($trgr:uri)/sem:triples/sem:triple;
declare variable $before := xdmp:eval('doc("'||$trgr:uri||'")', (),
  <options xmlns="xdmp:eval"><isolation>different-transaction</isolation></options>)/sem:triples/sem:triple;
declare variable $added_triples := local:diff($after, $before);
declare variable $added_graph := xdmp:document-get-collections($trgr:uri);
declare variable $deleted_triples := local:diff($before, $after);
declare variable $deleted_graph := xdmp:eval('xdmp:document-get-collections("'||$trgr:uri||'")', (),
  <options xmlns="xdmp:eval"><isolation>different-transaction</isolation></options>);
xdmp:log(fn:concat('***** Trigger processing: ', $trgr:uri, '*****')),
xdmp:log('***** added triples *****'),
xdmp:log($added_graph),
xdmp:log($added_triples),
xdmp:log('***** deleted triples *****'),
xdmp:log($deleted_graph),
xdmp:log($deleted_triples)
I created 3 pre-commit triggers, one for each of the trgr:document-content options: create, modify and delete, all invoking the above module. A SPARQL Update query will cause the above module to fire one or more times, logging the lists of triples which were added and deleted.
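For reference, creating one of those triggers might look roughly like the sketch below (not from the original post; the trigger name, collection scope and module location are assumptions). Trigger definitions live in the triggers database configured for the content database, so this would be run against that database:
xquery version "1.0-ml";
import module namespace trgr = "http://marklogic.com/xdmp/triggers" at "/MarkLogic/triggers.xqy";
trgr:create-trigger(
  "triples-modify",                                    (: hypothetical trigger name :)
  "Diff managed triples when a document is modified",
  trgr:trigger-data-event(
    trgr:collection-scope("test"),                     (: the collection/graph to watch :)
    trgr:document-content("modify"),                   (: repeat for "create" and "delete" :)
    trgr:pre-commit()),
  trgr:trigger-module(xdmp:database("Modules"), "/", "triggers/triples-diff.xqy"),  (: hypothetical module path :)
  fn:true(),
  xdmp:default-permissions())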
A couple of observations:
A single SPARQL Update statement can create, modify and delete multiple documents, so it will trigger the module multiple times.
INSERT statements seem to always create new documents, so you will never get added triples and deleted triples in the same invocation.
The code assumes there's only one collection for a document, which is the named graph for managed triples. It will need extra work if there are to be multiple collections per document.
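To cover the notification part of the original question, the xdmp:log calls above could be swapped for an HTTP call, for instance with xdmp:http-post. A minimal sketch, assuming a hypothetical endpoint; note that in a pre-commit trigger the request goes out before the commit, so it is sent even if the transaction later fails:
xdmp:http-post(
  "http://notify.example.com/triples",                 (: hypothetical endpoint :)
  <options xmlns="xdmp:http">
    <headers><content-type>application/xml</content-type></headers>
  </options>,
  element notification {
    attribute uri { $trgr:uri },
    element added { attribute graph { $added_graph }, $added_triples },
    element deleted { attribute graph { $deleted_graph }, $deleted_triples }
  })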


Access object on another form with the form as a variable

I'm writing a program in Delphi which includes creating the same dynamic object on multiple forms (never simultaneously), and then a procedure in another unit writes certain text to it.
How the object (TMemo) is created:
memHulp := TMemo.Create(frmHome);
with memHulp do
begin
  Parent := frmHome;
  Top := 208;
  Left := 88;
  Height := 98;
  Width := 209;
  ReadOnly := True;
end;
The properties aren't that important; this is just to show how the object is created and referred to.
Now, I need to read certain text into the memo from a text file, which in itself is no problem, but the problem comes when different forms are involved that all use the same self-defined procedure.
It's easy to write frmHome.memHulp.Lines.Add() in this particular case, but I'm having trouble when I need to display the text in a memo with exactly the same name on a different form.
The frmHome part needs to be a variable. So I tried this:
var
  Form: TForm;
begin
  Form := Application.FindComponent('frmHome') as TForm;
end;
That compiles without warnings or errors, but as soon as I try Form.memHulp.Lines.Add(), it doesn't work. I understand that Form probably doesn't have that property, but how do I make it look in the correct place? I need to be able to tell the program to look on whichever form name I pass as a parameter into the FindComponent() call.
If this is completely not possible, please suggest other solutions to achieve the same.
Form.memHulp doesn't work because Form is a plain vanilla TForm pointer, and TForm doesn't have a memHulp member. You could use Form.FindComponent('memHulp') instead, since you are assigning the TForm object as the Memo's Owner, but that would require you to assign a Name to the Memo, e.g.:
memHulp := TMemo.Create(frmHome);
with memHulp do
begin
  Parent := frmHome;
  Name := 'memHulp';
  ...
end;
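The writing procedure in the other unit can then take the form as a parameter and look the memo up by name. A sketch (the procedure name AddHelpText is hypothetical):
procedure AddHelpText(AForm: TForm; const S: string);
var
  Memo: TMemo;
begin
  // works because the form owns the memo and the memo's Name is set
  Memo := AForm.FindComponent('memHulp') as TMemo;
  if Assigned(Memo) then
    Memo.Lines.Add(S);
end;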
Alternatively, since you say you are creating only one Memo object at a time, you could simply make memHulp a global variable in some unit's interface section, and then you would have direct access to it without having to hunt for it.
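A minimal sketch of that alternative (the unit name uGlobals is hypothetical); any unit that uses it can then call memHulp.Lines.Add(...) directly once the memo has been created and assigned:
unit uGlobals;

interface

uses
  StdCtrls;

var
  memHulp: TMemo; // assigned by whichever form creates the memo

implementation

end.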

ORDS 18.4 Why am I getting an empty :body_text (CLOB)?

Can someone tell me why the value comes through empty?
To send the request, I use SoapUI 5.5.
But :body is not null.
Do I need to change something in the ORDS settings?
DECLARE
  --b_body BLOB := :body;
  c_body CLOB := :body_text;
BEGIN
  if :body_text is null then
    htp.print('EMPTY');
  end if;
END;
As the documentation at https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/18.3/aelig/implicit-parameters.html#GUID-76A23568-EA67-4375-A4AA-880E1D160D27 says for each of the implicit parameters :body and :body_text, "if it is dereferenced more than once, then the second and subsequent dereferences will appear to be empty." In your block, :body_text is dereferenced twice (once in the DECLARE and again in the IF), so the second dereference comes back empty.
So, change your code like this:
DECLARE
  --b_body BLOB := :body;
  c_body CLOB := :body_text;
BEGIN
  if c_body is null then
    htp.print('EMPTY');
  end if;
END;
If I remember correctly, it's not a good idea to use both binds in one code block...
If ORDS sees that you're using :body, then :body_text is not populated (I think to avoid the overall performance cost of converting a BLOB to a CLOB).
So just use :body_text and you should be fine!
This symptom may result from creating RESTful Services via older versions of the APEX SQL Workshop interface. APEX 5.1 certainly exhibits this behaviour, possibly others. If you are unable to upgrade APEX, use SQL Developer to create your ORDS modules.

Access locally scoped variables from within a string using parse or value (KDB / Q)

The following lines of Q code all throw an error, because when the statement "local" is parsed, the local variable is not in the correct scope.
{local:1; value "local"}[]
{[local]; value "local"}[1]
{local:1; eval parse "local"}[]
{[local]; eval parse "local"}[1]
Is there a way to reach the local variable from inside the parsed string?
Note: This is a simplification of the actual problem I'm grappling with, which is to write a function that executes a query, accepting a list of columns which it should return. I imagine the finished product looking something like this:
getData:{[requiredColumns;condition]
  value "select ",(", " sv string[requiredColumns])," from myTable where someCol=condition"
  }
The condition parameter in this query is the one that isn't recognised, and I do realise I could append its value rather than reference it inside a string, but the real query uses lots of local variables, including tables etc., so it's not as easy as just pulling all the variables out of the string before calling value on it.
I'm new to KDB and Q, so if anyone has a better way to achieve the same effect I'm happy to be schooled on the proper way to achieve this outcome in Q. I'd still be interested to know whether the variable access thing is possible, though.
In the first example, you are right that local is not within the correct scope, as value looks for the global variable local.
One way to get around this is to use a namespace, which defines the variable globally, accessed through that namespace. In the modified example below I have defined local in the .ns namespace:
{.ns.local:1; value ".ns.local"}[]
For the problem you are facing with selecting, if requiredColumns is a symbol list of columns you can just use the take operator # to select them.
getData:{[requiredColumns] requiredColumns#myTable}
For more advanced queries using variables you may have to use the functional select form, explained here. This will allow you to include variables in the where and by clauses of the select statement.
The same example in functional form would be (no by clause, only select and where):
getData:{[requiredColumns;condition] requiredColumns:(), requiredColumns;
?[myTable;enlist (=;`someCol;condition);0b;requiredColumns!requiredColumns]}
The first line ensures that requiredColumns is a list even if the user enters a single column name.
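A quick usage sketch, assuming a hypothetical myTable with columns id, someCol and val:
q) myTable:([]id:`a`b; someCol:3 4; val:10 20)
q) getData[`id`val;3]
id val
------
a  10
q) getData[`id;3]   / a single column works too, thanks to the (), on the first line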
value looks for the variable in the global scope; that's why you are getting an error. You can use local variables directly, as you are already doing in your function.
Your function is mostly correct; it just needs a slight correction to append the condition (mentioned below). However, a better approach would be to use a functional select in this case.
Using functional select:
q) t:([]id:`a`b; val:3 4)
q) gd: {?[`t;enlist (=;`val;y);0b;((),x)!(),x]}
q) gd[`id;3] / for single column
Output:
id
--
a
q) gd[`id`val;3] / for multiple columns
If your condition column is of type symbol, then enlist your condition value, like:
q) gd: {?[`t;enlist (=;`id;y);0b;((),x)!(),x]}
q) gd[`id;enlist `a]
You can use parse to get a functional form of qsql queries:
q) parse " select id,val from t where id=`a"
?
`t
,,(=;`id;,`a)
0b
`id`val!`id`val
Using string concatenation (your function):
q)getData:{[requiredColumns;condition] value "select ",(", " sv string[requiredColumns])," from t where id=", .Q.s1 condition}
q) getData[enlist `id;`a] / for a single column
q) getData[`id`val;`a] / for multiple columns

QuickBase Perl API: Not able to edit a Record

I am trying to update a QuickBase record via my Perl script. I am following the Perl API documentation: http://metacpan.org/pod/HTTP::QuickBase
The method used for editing a record is "EditRecord". Per the documentation, you cannot edit built-in fields, which is true,
and I know that I am not modifying a built-in field but a user-created field.
For example, I want to set the field called "OS" to "Windows".
So, per the Perl module's CPAN documentation mentioned above, I do this:
my %new_record=$qb_obj->GetRecord($database_id, $record_id);
$new_record{"OS"}="Windows";
$qb_obj->EditRecord($database_id, $record_id, %new_record);
But I get the following error:
The field named "Date Created" with field id 1 cannot be modified
which basically means it thinks I am trying to modify the field "Date Created" with field ID 1. However, I am not doing that; it must be pulling in that field somehow. Neither the Perl nor the QuickBase documentation is helping much.
Here is the Quickbase API documentation: http://www.quickbase.com/api-guide/edit_record.html#Overview
Can someone help me with this?
Thanks.
Since you already know the id of the record, you don't need to read the record before modifying it. You should be able to just remove your first line and create %new_record from scratch without reading it from QuickBase; then your second and third lines should work fine.
The alternative is to remove the built-in QB fields from %new_record before doing the EditRecord.
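Hedged sketches of both suggestions; the exact set of built-in field names (other than "Date Created") depends on the application:
# Option 1: build the hash from scratch with only the fields you want to change
my %changes = ( 'OS' => 'Windows' );
$qb_obj->EditRecord($database_id, $record_id, %changes);

# Option 2: read the record first, then strip the built-in fields before editing
my %new_record = $qb_obj->GetRecord($database_id, $record_id);
delete @new_record{'Date Created', 'Date Modified', 'Record ID#', 'Record Owner', 'Last Modified By'};
$new_record{'OS'} = 'Windows';
$qb_obj->EditRecord($database_id, $record_id, %new_record);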

Can the Sequence of RecordSets in a Multiple RecordSet ADO.Net resultset be determined, controlled?

I am using code similar to this Support / KB article to return multiple recordsets to my C# program.
But I don't want the C# code to be dependent on the physical sequence of the recordsets returned in order to do its job.
So my question is, "Is there a way to determine which set of records from a multiplerecordset resultset am I currently processing?"
I know I could probably decipher this indirectly by looking for a unique column name or something per resultset, but I think/hope there is a better way.
P.S. I am using Visual Studio 2008 Pro & SQL Server 2008 Express Edition.
No, because the SqlDataReader is forward only. As far as I know, the best you can do is open the reader with KeyInfo and inspect the schema data table created with the reader's GetSchemaTable method (or just inspect the fields, which is easier, but less reliable).
I spent a couple of days on this. I ended up just living with the physical order dependency. I heavily commented both the calling method and the stored procedure with !!!IMPORTANT!!!, and included an #If...#End If block to output the result sets when needed, to validate the stored procedure output.
The following code snippet may help you.
Helpful Code
Dim fContainsNextResult As Boolean
Dim oReader As DbDataReader = Nothing
oReader = Me.SelectCommand.ExecuteReader(CommandBehavior.CloseConnection Or CommandBehavior.KeyInfo)

#If DEBUG_ignore Then
'load method of data table internally advances to the next result set
'therefore, must check to see if reader is closed instead of calling next result
Do
    Dim oTable As New DataTable("Table")
    oTable.Load(oReader)
    oTable.WriteXml("C:\" + Environment.TickCount.ToString + ".xml")
    oTable.Dispose()
Loop While oReader.IsClosed = False

'must re-open the connection
Me.SelectCommand.Connection.Open()

'reload data reader
oReader = Me.SelectCommand.ExecuteReader(CommandBehavior.CloseConnection Or CommandBehavior.KeyInfo)
#End If

Do
    Dim oSchemaTable As DataTable = oReader.GetSchemaTable
    '!!!IMPORTANT!!! PopulateTable expects the result sets in a specific order
    ' Therefore, if you suddenly start getting exceptions that only a novice would make
    ' the stored procedure has been changed!
    PopulateTable(oReader, oDatabaseTable, _includeHiddenFields)
    fContainsNextResult = oReader.NextResult
Loop While fContainsNextResult
Because you're explicitly stating the order in which to execute the SQL statements, the results will appear in that same order. In any case, if you want to programmatically determine which recordset you're processing, you still have to identify some columns in the result.
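For example, a sketch of that approach, reusing oReader from the snippet above and dispatching on column names instead of position ("CustomerId", "OrderId" and the Populate* handlers are hypothetical):
Do
    Dim isCustomerSet As Boolean = False
    Dim isOrderSet As Boolean = False
    'inspect the column names of the current result set
    For i As Integer = 0 To oReader.FieldCount - 1
        Select Case oReader.GetName(i)
            Case "CustomerId" : isCustomerSet = True
            Case "OrderId" : isOrderSet = True
        End Select
    Next
    If isCustomerSet Then
        PopulateCustomers(oReader)   'hypothetical handler
    ElseIf isOrderSet Then
        PopulateOrders(oReader)      'hypothetical handler
    End If
Loop While oReader.NextResult()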