How do I create an autoNumbered predicate in LogiQL?

I'm wondering how to work with autoNumbered refmode predicates in LogicBlox / LogiQL.
I followed the example in the manual, but I'm having trouble asserting facts into the entity predicate: the first fact is added, but subsequent attempts are not.
Here is what I tried to do in the LB interactive shell:
lb> create wibble
created workspace 'wibble'
lb wibble> addblock '
>auto(x), auto_id(x:id) -> int(id).
>lang:autoNumbered(`auto_id).
>cons_auto[] = x -> auto(x).
>lang:constructor(`cons_auto).'
added block 'block_1Z2ZWC0N'
lb wibble> exec '+auto(x), +cons_auto[] = x.'
lb wibble> popcount auto
1: auto
lb wibble> exec '+auto(x), +cons_auto[] = x.'
lb wibble> popcount auto
1: auto

The issue here is the constructor, cons_auto. The way constructors work is that, for every unique key-tuple to the constructor, a unique entity is created, regardless of how many times you assert into the constructor with the same key-tuple.
You have defined a constructor with no key. This means exactly one entity will be created using this constructor, no matter how many times you execute a delta rule asserting into it.
You can define the constructor a bit differently, with one key for example:
cons_auto_onekey[key] = x -> int(key), auto(x).
lang:constructor(`cons_auto_onekey).
Now you can do:
+cons_auto_onekey[1] = x, +auto(x).
And then,
+cons_auto_onekey[2] = x, +auto(x).
You would see that there would be two auto entities created.
Now of course, I suspect this is not what you want -- because how are you supposed to get the keys? The whole point of making auto an auto-numbered entity is probably that you wanted to automatically generate "references".
This is where transaction:id is useful, e.g.:
+cons_auto_onekey[key] = x, +auto(x) <-
transaction:id[] = key.
Note that transaction:id is unique per transaction, per workspace. That means in the same transaction you get only one transaction:id, and if you want to create more than one auto entity in the same transaction, you'll have to do some computation off of transaction:id to get further unique numbers within the transaction.
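For example, here is a sketch of minting two entities in one transaction by deriving two distinct keys from transaction:id (the arithmetic is an assumption for illustration; any scheme that keeps keys unique across transactions works):
+cons_auto_onekey[k1] = x1, +auto(x1),
+cons_auto_onekey[k2] = x2, +auto(x2) <-
  transaction:id[] = tid,
  k1 = tid * 2,      // even key for the first entity
  k2 = tid * 2 + 1.  // odd key for the second entity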
There's also a uid series function that can help generate unique ids, but you'll need something unique to seed it with. I'm not sure that will help you, but let me know if the above doesn't get you far enough, and we can explore whether uid can help.

How can I inject agents in a source block from an uploaded database table?

I am trying to inject agents from a database into a specific Source block. The database consists of two columns, "OrderType" and "OrderAmount". I wish to inject "OrderAmount" agents of the corresponding "OrderType" into this source, while retaining the differentiation (i.e., by storing a parameter attribute ID of each entry/agent in the corresponding agent type of the Source block).
I have saved the entries from the database in collections and constructed a table like this (both arrays are of type double):
double[][] ArrayCustomerOrders = new double[coll_CustomerOrderType.size()][2];
for (int i = 0; i < coll_CustomerOrderType.size(); i++) {
    ArrayCustomerOrders[i][0] = coll_CustomerOrderType.get(i);
    ArrayCustomerOrders[i][1] = coll_CustomerOrderAmount.get(i);
}
I tried playing around with the Source block's inject() calls in the same event where I construct the order table, but was unable to pass it suitable arguments.
Does anyone have a suggestion on how to go about this?
In your case, you can simply replace the Source block with an Enter block (named "myEnterBlock"):
Create an empty agent population pop_Orders with an agent type that has two parameters, p_Type and p_Amount.
In your for-loop code, when you have the current order type and amount, create such an agent and push it into the Enter block directly:
myEnterBlock.take(add_pop_Orders(currentType, currentAmount))
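Putting it together with the loop from the question, a minimal sketch (it assumes pop_Orders was created with those two parameters, so AnyLogic generates the add_pop_Orders(...) helper for it):
for (int i = 0; i < coll_CustomerOrderType.size(); i++) {
    double currentType = coll_CustomerOrderType.get(i);
    double currentAmount = coll_CustomerOrderAmount.get(i);
    // create the agent in the population and push it straight into the flowchart
    myEnterBlock.take(add_pop_Orders(currentType, currentAmount));
}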

How to create an element from the API with a name from the appropriate sequence?

I'd like to use EA to generate Requirement elements programmatically. I need to use the same sequence numbering (REQ00000xy) as the GUI does when pressing the "Auto" button in the "Add Element ..." dialog, in order to keep numbering consistent for Requirement elements created either from the GUI or from the API.
Selecting the last used sequence number from already existing Requirement elements won't help, as it doesn't advance the sequence number for the next Requirement created from the GUI.
Is there a way to get (and properly use) the sequence number via EA API or EA SQL?
The table you're looking for is t_trxtypes. It contains something like this (EA's output):
Description;NumericWeight;Notes;TRX;TRX_ID;Style;
Autocount;1,00;prefix=bla;suffix=x;active=1;active_a=0;counter=126;;Class;1; ;
You're interested in the column Notes, which holds a list like
prefix=bla;suffix=x;active=1;active_a=0;counter=126;
This is a test setting for a class which currently has the number 126. So the next created class would be named bla126x and the entry would change to
prefix=bla;suffix=x;active=1;active_a=0;counter=127;
Just keep the column t_trxtypes.Notes in sync with your creations.
Note EA does not (seem to) allow direct DB access. However, it has a proven back door:
Repository.Execute("UPDATE t_trxtypes SET Notes='prefix=bla;suffix=x;active=1;active_a=0;counter=127;' WHERE TRX_ID=<your id>")
will do the update (replace <your id> with the appropriate key). Though Execute is undocumented, it has worked for ages, and Sparx will surely not remove it, as nowadays everyone relies on it.
As a side note: this counter is not safe. There are lots of ways (the easiest is a simple rename) to break it. You'd need some script or add-in to regularly check that your numbering is still consistent. If you rely on requirement numbering, you are better off using an external system like, I dare say, DOORS.
Finally, RTFM....
For elements where a sequence is defined, if you pass an empty name to the AddNew() function, EA generates the sequence upon .Update(), not earlier. So if you plan to use the generated sequence and add some description, you need to create the element with an empty name first, call Update(), and after that append your description to the content of the Name field.
As easy as this.
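A sketch of that recipe using the org.sparx Java API (the pkg variable and the appended text are assumptions for illustration):
// assumes 'pkg' is the org.sparx.Package the Requirement should live in
org.sparx.Element req = pkg.GetElements().AddNew("", "Requirement"); // empty name on purpose
req.Update();                                   // EA assigns the sequence name (e.g. REQ00000xy) here
req.SetName(req.GetName() + " my description"); // append to the generated name
req.Update();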

Assigning a whole DataStructure its nullind array

Some context before the question.
Imagine a file FileA having around 50 fields of different types. Instead of all programs using the file directly, I tried having a service program, so the file could only be accessed through that service program. The programs calling the service would then receive a data structure based on the file structure, as an ExtName. I use SQL to retrieve the information, so, basically, the procedure goes like this:
Datastructure shared by service program :
D FileADS E DS ExtName(FileA) Qualified
Procedure called by programs :
P getFileADS B Export
D PI N
D PI_IDKey 9B 0 Const
D PO_DS LikeDS(FileADS)
D LocalDS E DS ExtName(FileA) Qualified
D NullInd S 5I 0 Dim(50) <-- since 50 fields in FileA
//Code
Clear LocalDS;
Clear PO_DS;
exec sql
SELECT *
INTO :LocalDS :nullind
FROM FileA
WHERE FileA.ID = :PI_IDKey;
If SqlCod <> 0;
Return *Off;
EndIf;
PO_DS = LocalDS;
Return *On;
P getFileADS E
So, that procedure will return a datastructure filled with a record from FileA if it finds it.
Now my question: is there any way I can assign %nullind(field) = *On without specifying each of the 50 fields of my file?
Something like a loop
i = 1;
DoW (i <= 50);
if nullind(i) = -1;
%nullind(datastructure.field) = *On;
endif;
i++;
EndDo;
Because, let's face it, it'd be a pain to list every field of every file each time.
I know a simple chain(n) could do the trick
chain(n) PI_IDKey FileA FileADS;
but I really was looking to do it with SQL.
Thank you for your advice!
OS Version : 7.1
First, you'll be better off in the long run by eliminating SELECT * and supplying a SELECT list of the 50 field names.
Next, consider these two web pages -- Meaningful Names for Null Indicators and Embedded SQL and null indicators. The first shows an example of assigning names to each null indicator to match the associated field names. It's just a matter of declaring a based DS with names, based on the address of your null indicator array. The second points out how a null indicator array can be larger than needed, so future database changes won't affect results. (Bear in mind that the page shows a null array of 1000 elements, and the memory is actually relatively tiny even at that size. You can declare it smaller if you think it's necessary for some reason.)
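A minimal sketch of that first idea in free-form RPG (the subfield names are invented for illustration; you would declare one int(5) subfield per field of FileA, in field order):
dcl-s NullInd int(5) dim(50);               // the indicator array passed to SQL
dcl-s pNulls pointer inz(%addr(NullInd));
dcl-ds namedNulls based(pNulls) qualified;  // overlays the same storage as NullInd
  IDKey_n  int(5);                          // null indicator for field 1
  Name_n   int(5);                          // null indicator for field 2
  // ... one int(5) subfield per remaining field of FileA
end-ds;
// after the SELECT ... INTO :LocalDS :NullInd, you can test
// namedNulls.Name_n < 0 instead of remembering that element 2 is the Name field.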
You're creating a proc that you'll only write once. It's not worth saving the effort of listing the 50 fields. Maybe if you had many programs using this proc and you had to create the list each time it'd be a slight help to use SELECT *, but even then it's not a great idea.
A matching template DS for the 50 data fields can be defined in the /COPY member that will hold the proc prototype. The template DS will be available in any program that brings the proc prototype in. Any program that needs to call the proc can simply specify LIKEDS referencing the template to define its version in memory. The template DS should probably include the QUALIFIED keyword, and programs would then use their own DS names as the qualifying prefix. The null indicator array can be handled similarly.
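A sketch of that layout (member and names invented for illustration):
// in the /COPY member, next to the proc prototype:
dcl-ds FileADS_t extname('FILEA') qualified template;
end-ds;
// in any program that includes the /COPY member:
dcl-ds myFileA likeds(FileADS_t);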
However, it's not completely clear what your actual question is. You show an example loop and ask if it'll work, but you don't say if you had a problem with it. It's an array, so a loop can be used much like you show. But it depends on what you're actually trying to accomplish with it.
For old-school RPG, just include the null indicators in the data structure populated by the SELECT statement:
select col1, case when col1 is null then -1 else 0 end, col2, case when col2 is null then -1 else 0 end, etc. into :dsfilewithnull from fileA f where f.id = :id;
For old-school RPG that can't handle nulls, remove them in the SELECT statement:
select coalesce(col1, 0), coalesce(col2, ' '), coalesce(col3, :lowdate) into :dsfile from fileA f where f.id = :id;
The second method would be easier to use in a legacy environment.
Pass the key by value to the procedure so you can use it like a built-in function.
One answer to your question would be to make the array part of a data structure, and assign *all'0' to the data structure.
dcl-ds nullIndDs;
nullInd Ind Dim(50);
end-ds;
nullIndDs = *all'0';
The answer by jmarkmurphy is an example of assigning all zeros to an array of indicators. For the example that you show in your question, you can do it this way:
D NullInd S 5i 0 dim(50)
/free
NullInd(*) = 1 ;
Nullind(*) = 0 ;
*inlr = *on ;
return ;
/end-free
That's a complete program that you can compile and test. Run it in debug and stop at the first statement. Display NullInd to see the initial value of its elements. Step through the first statement and display it again to see how the elements changed. Step through the next statement to see how things changed again.
As for "how to do it in SQL", that part doesn't make sense. SQL sets the values automatically when you FETCH a row. Other than that, the array is used by the host language (RPG in this case) to communicate values back to SQL. When a SQL statement runs, it again automatically uses whatever values were set. So, it either is used automatically by SQL for input or output, or is set by your host language statements. There is nothing useful that you can do 'in SQL' with that array.

Using joblets in Talend with tMemorizeRows and tJavaFlex

I am trying to create some joblets in Talend that will speed up some processes.
I have an input from an MSSQLInput; the results are then sorted and filtered a little. Then I have a tMemorizeRows and a tJavaFlex; the purpose of this is to memorize the rows in a column to perform a count. The count is based on a customer ID: once the ID changes, the count starts back at 1 and the process begins again, continuing to the end. I have refactored this as a joblet, but it does not work; the error is:
ID_tMemorizeRows_1 cannot be resolved to a variable
I have a tJavaFlex which starts with
int counte = 1;
The Main code is
if(ID_tMemorizeRows_1[0].equals(ID_tMemorizeRows_1[1]))
{
counte = counte + 1;
}
else
{
counte = 1;
}
context.Enqnum = counte;
The Enqnum context variable is created correctly and added into a tMap component.
Does anyone know why this is happening? One person told me it is because when you move something into a joblet it gets a new/different name, so it has to be referenced by that name in the Java. If that is the case, how do I find the name out?
Thank you
Rich
I do have a resolution.
When using joblets we know that Talend essentially recycles the code used in the joblet by inserting it into the code for the main job.
This is the joblet I have created. I know it works, because I refactored it into a joblet instead of building it from scratch. What it does is simply memorise rows 0 and 1 in an ordered data set; the Java performs the count and the tMap appends the result to the flow (as mentioned above).
When the job is run it runs fine. But problems occur when I want to reuse the same joblet in another part of the job. What Talend does is it assigns names within the source code to each component depending on the name of the joblet.
For example, if the Joblet was called ThisJob, then tMemorizeRows_1 would be called ThisJob_1_tMemorizeRows_1.
The row within the component (in this example ReferenceID) would be renamed as:
ReferenceID_ThisJob_1_tMemorizeRows_1.
But when you add a second joblet to your job, it is given a new name, e.g. ThisJob_2. This name will differ depending on how much you have altered your job before adding the second joblet, so the number within the name depends on that activity.
If you add the joblet into your job immediately, it would be called ThisJob_2; if you have added 5 other components before you add it in, the joblet is likely to be called ThisJob_6, etc. (I'm not 100% sure how Talend renames components.)
When you add a joblet, you can see its name on the joblet component; this then reverts back to the original joblet name when you create any links/joins to other components.
It's also important that, within the generated code, the name of each component is assigned to a variable called currentComponent.
Resolution
What I did was use Java to split the name, with the code below. This way I can get the current name of the joblet and use it in my Java.
String string = currentComponent;        // e.g. "ThisJob_1_tMemorizeRows_1"
String[] parts = string.split("_");
String part1 = parts[0];                 // "ThisJob"
String part2 = parts[1];                 // "1"
String joblet = part1 + '_' + part2;     // "ThisJob_1"
String newrow = "ReferenceID_" + joblet + "_tMemorizeRows_1";
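A slightly more defensive variant of the snippet above (a sketch; it assumes currentComponent always ends with the original component suffix, here _tMemorizeRows_1) strips the known suffix instead of splitting on every underscore, so it also survives joblet names that themselves contain underscores:
String suffix = "_tMemorizeRows_1";
// strip the known component suffix to get the joblet instance prefix, e.g. "ThisJob_1"
String jobletInstance = currentComponent.substring(0, currentComponent.length() - suffix.length());
String newrow = "ReferenceID_" + jobletInstance + suffix;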
I hope this makes sense.
Thanks

Intersystems Cache - Maintaining Object Code to ensure Data is Compliant with Object Definition

I am new to using Intersystems Cache and face an issue where I am querying data stored in Cache, exposed by classes which do not seem to accurately represent the data in the underlying system. The data stored in the globals is almost always larger than what is defined in the object code.
As such I get errors like the one below very frequently.
Msg 7347, Level 16, State 1, Line 2
OLE DB provider 'MSDASQL' for linked server 'cache' returned data that does not match expected data length for column '[cache]..[namespace].[tablename].columname'. The (maximum) expected data length is 5, while the returned data length is 6.
Does anyone have any experience with implementing some type of quality process to ensure that the object definitions (SQL mappings) are maintained in such a way that they can accommodate the data being persisted in the globals?
Property columname As %String(MAXLEN = 5, TRUNCATE = 1) [ Required, SqlColumnNumber = 2, SqlFieldName = columname ];
In this particular example, the system has the column defined with a MAXLEN of 5; however, the data stored in the system is 6 characters long.
How can I proactively monitor and repair such situations?
/*
I did not create these object definitions in cache
*/
It's not completely clear what "monitor and repair" would mean for you, but:
How much control do you have over the database side? Cache runs code for a data type when converting from a global to ODBC, using the LogicalToODBC method of the data-type class. If you change the property types from %String to your own class, AppropriatelyNamedString, then you can override that method to automatically truncate, if that's what you want to do. It is possible to change all the %String property types programmatically using the %Library.CompiledClass class.
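A sketch of that override (an assumption of the general pattern only; check the %String data-type class in your Caché version for the exact method signature and parameters):
Class AppropriatelyNamedString Extends %String
{
ClassMethod LogicalToODBC(%val As %String) As %String
{
    // truncate on the way out so ODBC clients never see over-length values
    Quit $EXTRACT(%val, 1, 5)
}
}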
It is also possible to run code within Cache to find records with properties that are above the (somewhat theoretical) maximum length. This obviously would require full table scans. It is even possible to expose that code as a stored procedure.
Again, I don't know what exactly you are trying to do, but those are some options. They probably do require getting deeper into the Cache side than you would prefer.
As far as preventing the bad data in the first place, there is no general answer. Cache allows programmers to directly write to the globals, bypassing any object or table definitions. If that is happening, the code doing so must be fixed directly.
Edit: Here is code that might work for detecting bad data. It might not work if you are doing certain funny stuff, but it worked for me. It's kind of ugly because I didn't want to break it up into methods or tags. This is meant to run from a command prompt, so it would probably have to be modified for your purposes.
{
S ClassQuery=##CLASS(%ResultSet).%New("%Dictionary.ClassDefinition:SubclassOf")
I 'ClassQuery.Execute("%Library.Persistent") b q
While ClassQuery.Next(.sc) {
If $$$ISERR(sc) b Quit
S ClassName=ClassQuery.Data("Name")
I $E(ClassName)="%" continue
S OneClassQuery=##CLASS(%ResultSet).%New(ClassName_":Extent")
I '$IsObject(OneClassQuery) continue //may not exist
try {
I 'OneClassQuery.Execute() D OneClassQuery.Close() continue
}
catch
{
D OneClassQuery.Close()
continue
}
S PropertyQuery=##CLASS(%ResultSet).%New("%Dictionary.PropertyDefinition:Summary")
K Properties
s sc=PropertyQuery.Execute(ClassName) I 'sc D PropertyQuery.Close() continue
While PropertyQuery.Next()
{
s PropertyName=$G(PropertyQuery.Data("Name"))
S PropertyDefinition=""
S PropertyDefinition=##CLASS(%Dictionary.PropertyDefinition).%OpenId(ClassName_"||"_PropertyName)
I '$IsObject(PropertyDefinition) continue
I PropertyDefinition.Private continue
I PropertyDefinition.SqlFieldName=""
{
S Properties(PropertyName)=PropertyName
}
else
{
I PropertyName'="" S Properties(PropertyDefinition.SqlFieldName)=PropertyName
}
}
D PropertyQuery.Close()
I '$D(Properties) continue
While OneClassQuery.Next(.sc2) {
B:'sc2
S ID=OneClassQuery.Data("ID")
Set OneRowQuery=##class(%ResultSet).%New("%DynamicQuery:SQL")
S sc=OneRowQuery.Prepare("Select * FROM "_ClassName_" WHERE ID=?") continue:'sc
S sc=OneRowQuery.Execute(ID) continue:'sc
I 'OneRowQuery.Next() D OneRowQuery.Close() continue
S PropertyName=""
F S PropertyName=$O(Properties(PropertyName)) Q:PropertyName="" d
. S PropertyValue=$G(OneRowQuery.Data(PropertyName))
. I PropertyValue'="" D
.. S PropertyIsValid=$ZOBJClassMETHOD(ClassName,Properties(PropertyName)_"IsValid",PropertyValue)
.. I 'PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has invalid value of "_PropertyValue
.. //I PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has VALID value of "_PropertyValue
D OneRowQuery.Close()
}
D OneClassQuery.Close()
}
D ClassQuery.Close()
}
The simplest solution is to increase the MAXLEN parameter to 6 or larger. Caché only enforces MAXLEN and TRUNCATE when saving. Within other Caché code this is usually fine, but unfortunately ODBC clients tend to expect this to be enforced more strictly. The other option is to write your SQL like SELECT LEFT(columnname, 5)...
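For the first option, that is a one-parameter change to the property shown in the question:
Property columname As %String(MAXLEN = 6, TRUNCATE = 1) [ Required, SqlColumnNumber = 2, SqlFieldName = columname ];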
The simplest solution, which I use for all Integration Services packages, for example, is to create a query that casts all nvarchar or char data to the correct length. In this way, my data never fails due to truncation.
Optional:
First run a query like: SELECT MAX(DATALENGTH(mycolumnname)) FROM cachenamespace.tablename
Your new query:
SELECT CAST(mycolumnname AS varchar(6)) AS mycolumnname,
       CONVERT(varchar(8000), memo_field) AS memo_field
FROM cachenamespace.tablename
Your pain of getting the data will be lessened but not eliminated.
If you use any type of OLE DB provider, or if you use an OPENQUERY in SQL Server,
the casts must occur in the query sent to the Intersystems CACHE db, not in the outer query that retrieves data from the inner OPENQUERY.
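For example, a sketch with OPENQUERY against the linked server from the error message (the table and column names are placeholders taken from the question):
SELECT *
FROM OPENQUERY(cache, '
    SELECT CAST(columname AS varchar(6)) AS columname
    FROM namespace.tablename
');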