Retrieve the decision table in readable format from ODM Decision Center using Java code/API - jrules

I need to extract the contents of a decision table in a readable/tabular format from the IBM ODM Decision Center Business Console, using Java and the ODM APIs provided.
This has to be an automated process, so I cannot use the existing Import/Export feature provided by ODM.
Does anyone have sample code, or has anyone implemented such a requirement?
Version: ODM 8.10.1

Step 1: Query Decision Center for the decision table (see the Decision Center API documentation for instructions).
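A rough sketch of that step, following the pattern of the IBM teamserver API samples (server URL, credentials, project name, and the query text are placeholders; verify the helper names against your ODM version):

// Classes come from ilog.rules.teamserver.model and ilog.rules.teamserver.brm.
IlrSessionFactory factory = new IlrRemoteSessionFactory();
factory.connect("rtsAdmin", "rtsAdmin",
        "http://localhost:9080/teamserver", "jdbc/ilogDataSource");
IlrSession session = factory.getSession();

// Open the project and point the session at its current baseline
IlrRuleProject project = IlrSessionHelper.getProjectNamed(session, "MyRuleProject");
IlrBaseline baseline = IlrSessionHelper.getCurrentBaseline(session, project);
session.setWorkingBaseline(baseline);

// Find the rule elements; each one can then be tested as in Step 2
IlrDefaultSearchCriteria criteria = new IlrDefaultSearchCriteria("Find all business rules");
List<IlrElementDetails> rules = session.findElements(criteria, IlrModelConstants.ELEMENT_DETAILS);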
Step 2: cast the element to a decision table and render it as HTML:
if (rule.getType().equals("brm.DecisionTable")) {
    IlrDecisionTable dTable = (IlrDecisionTable) rule;
    // Build a decision table controller in the table's own locale
    IlrDTController controller = IlrSessionHelper.getDTController(session, dTable, new Locale(dTable.getLocale()));
    // Signature: getHTMLTable(IlrDTController controller, boolean showEmptyRow, boolean includeCSS)
    String tableHtml = IlrDTHelper.getHTMLTable(controller, false, false);
    ...
}

If you synchronize your Rule Designer rules with Decision Center, you can use the REST API to extract your decision table as an Excel file from Decision Center.
https://www.ibm.com/docs/en/odm/8.10?topic=api-rest
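As a rough illustration only (the endpoint path and table id below are assumptions, not a documented contract; check the REST API reference linked above for the real routes in your version):

// Hypothetical sketch: download a decision table export over HTTP.
// The /decisioncenter-api/... path and the table id are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class DecisionTableExport {
    public static void main(String[] args) throws Exception {
        String tableId = "your-decision-table-id"; // placeholder
        String auth = Base64.getEncoder()
                .encodeToString("rtsAdmin:rtsAdmin".getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9080/decisioncenter-api/v1/decisiontables/"
                        + tableId + "/export")) // hypothetical route
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<byte[]> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofByteArray());
        Files.write(Paths.get("decision-table.xlsx"), response.body());
    }
}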
Hope this helps,
Emmanuel

Related

How to query domain in jaspersoft with Dynamic Parameters

I am new to Jaspersoft reporting. I am currently designing and developing reports with the following requirements.
I want to create template-based reports where I pass all dynamic parameters into the SQL query.
While going through the Jaspersoft documentation, I found that we can create join views and cache data by creating Domains, which reduces hits at the database level.
While creating a report, I found that I can't execute SQL scripts against Domain objects.
Please advise whether I am on the right track.
Basically, I want to query the cached data (the Domain view) instead of hitting the database directly.
Please suggest a workaround if one is available.
Please note, although JasperReports Server manages a cache for Ad Hoc Views and Ad Hoc Reports running on Domains, running a JRXML report (e.g. designed in Jaspersoft Studio) on a Domain does not guarantee hitting that cache.
You also have the option of using a layer that provides caching between JasperReports Server and your database. For example, support has been recently added for TIBCO Data Virtualization (not a free product) in v.7, see https://www.jaspersoft.com/introducing-jaspersoft-7.
In any case, Domains are not relational databases and therefore do not support straight SQL.
You can use the "Domain query language" though, which offers a subset of the features of SQL. The easiest way to write a query is using Jaspersoft Studio and selecting "domain" in the Language dropdown (top-left corner of the Dataset and Query Dialog, indicated by the red arrow in the screenshot below from Studio 6.4.0):
For example the design above (which uses the Supermart Domain, provided with the sample data) will generate this query and the required "dynamic" parameter as you requested – in this case a Collection as the filter is 'Is One Of' which can take multiple values:
<query>
    <queryFields>
        <queryField id="sales_fact_ALL.sales__product.sales__product__product_name"/>
        <queryField id="sales_fact_ALL.sales_fact_ALL__store_sales_2013"/>
    </queryFields>
    <queryFilterString>sales_fact_ALL.sales__store.sales__store__region.sales__store__region__sales_country in sales__store__region__sales_country_0</queryFilterString>
</query>
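Note that the Collection parameter named in the queryFilterString has to exist in the report; in the JRXML that is a plain parameter declaration (sketch, reusing the generated name above):

<parameter name="sales__store__region__sales_country_0" class="java.util.Collection"/>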
See here for another example of a query (current version of docs based on 7.1.0 release), in this case for use with the REST API: https://community.jaspersoft.com/documentation/tibco-jasperreports-server-rest-api-reference/v710/queryexecutor-service
The queryFilterString tag follows the DomEL syntax as documented here (also for 7.1.0): https://community.jaspersoft.com/documentation/tibco-jasperreports-server-user-guide/v71/domel-syntax

How to control data failures in Azure Data Factory Pipelines?

I receive an error from time to time due to incompatible data in my source data set compared to my target data set. I would like to control the action the pipeline takes based on error type, perhaps outputting or dropping those particular rows while completing everything else. Is that possible? Furthermore, is it possible to get hold of the actual failing line(s) from Data Factory, in some simple way, without accessing and searching the actual source data set?
Copy activity encountered a user error at Sink side: ErrorCode=UserErrorInvalidDataValue,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Column 'Timestamp' contains an invalid value '11667'. Cannot convert '11667' to type 'DateTimeOffset'.,Source=Microsoft.DataTransfer.Common,''Type=System.FormatException,Message=String was not recognized as a valid DateTime.,Source=mscorlib,'.
Thanks
I think you've hit a fairly common problem and limitation within ADF. Although the datasets you define with your JSON allow ADF to understand the structure of the data, that is all, just the structure; the orchestration tool can't do anything to transform or manipulate the data as part of the activity processing.
To answer your question directly: it's certainly possible, but you need to break out the C# and use ADF's extensibility functionality to deal with your bad rows before passing the data to the final destination.
I suggest you expand your data factory to include a custom activity where you can build some lower-level cleaning processes to divert the bad rows as described.
This is an approach we often take, as not all data is perfect (I wish) and plain ETL or ELT doesn't work. I prefer the acronym ECLT, where the 'C' stands for clean (or cleanse, prepare, etc.). This certainly applies to ADF, because this service doesn't have its own compute or SSIS-style data flow engine.
So...
In terms of how to do this. First I recommend you check out this blog post on creating ADF custom activities. Link:
https://www.purplefrogsystems.com/paul/2016/11/creating-azure-data-factory-custom-activities/
Then within your C# class inherited from IDotNetActivity do something like the below.
public IDictionary<string, string> Execute(
    IEnumerable<LinkedService> linkedServices,
    IEnumerable<Dataset> datasets,
    Activity activity,
    IActivityLogger logger)
{
    // ... resolve YourSource and YourDestination paths from the input/output datasets
    using (StreamReader vReader = new StreamReader(YourSource))
    {
        using (StreamWriter vWriter = new StreamWriter(YourDestination))
        {
            while (!vReader.EndOfStream)
            {
                string vLine = vReader.ReadLine();
                // Data transform logic goes here: validate the row and either
                // write it to the destination or divert/log it as a bad row.
                vWriter.WriteLine(vLine);
            }
        }
    }
    // The returned dictionary can carry state for downstream activities.
    return new Dictionary<string, string>();
}
You get the idea. Build your own SSIS data flow!
Then write out your clean row as an output dataset, which can be the input for your next ADF activity. Either with multiple pipelines, or as chained activities within a single pipeline.
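As a rough sketch of that chaining (ADF v1 pipeline JSON; every name is a placeholder and the activities are trimmed down, so treat this as illustrative, not definitive), the custom activity's output dataset simply becomes the next activity's input:

{
  "name": "CleanThenLoadPipeline",
  "properties": {
    "activities": [
      {
        "name": "CleanBadRows",
        "type": "DotNetActivity",
        "linkedServiceName": "AzureBatchLinkedService",
        "typeProperties": {
          "assemblyName": "RowCleaner.dll",
          "entryPoint": "RowCleaner.CleanActivity",
          "packageLinkedService": "StorageLinkedService",
          "packageFile": "customactivities/RowCleaner.zip"
        },
        "inputs": [ { "name": "RawRowsDataset" } ],
        "outputs": [ { "name": "CleanRowsDataset" } ]
      },
      {
        "name": "CopyCleanRows",
        "type": "Copy",
        "typeProperties": {
          "source": { "type": "BlobSource" },
          "sink": { "type": "SqlSink" }
        },
        "inputs": [ { "name": "CleanRowsDataset" } ],
        "outputs": [ { "name": "SinkDataset" } ]
      }
    ]
  }
}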
This is the only way you will get ADF to deal with your bad data in the current service offerings.
Hope this helps

Need pointers on how report generation can happen in CQ5

We have created a set of forms in CQ5, and we have a requirement that the content of these forms should be stored at a specific node. Our forms interact with third-party services and get some data from there as well; this is also stored on the same nodes.
Now we have to give authors permission to go and download these reports, based on ACLs. I also have to provide them start and end dates, and upon selecting these dates the content placed in these nodes should be exportable in CSV format.
Can anybody guide me on how to achieve this functionality? I have gone through report generation but need better clarity on how this can be achieved: how will I be able to use the QueryBuilder API, how can I export, and how do I provide the dates in the UI?
This was achieved as described below.
I had to override the default report generation mechanism, and I created my own custom report using the report generation tutorial in the CQ documentation.
Once the report templates and components were written, I also overrode the CQ report page component and provided input dates in body.jsp using the Granite date component.
Once users selected dates, I used the QueryBuilder API to search for nodes at a path (specified by the author; it can differ per form's data). I also set an artificial resource type on the nodes where I was storing the data, which led me to the exact nodes; this property was also passed to QueryBuilder. The JSON returned as the response from QueryBuilder was then supplied to a JS that converted the data to CSV format.
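A hedged sketch of the QueryBuilder part (the path, resource type, and date bounds are placeholders; the predicates are standard CQ QueryBuilder predicates):

// Uses com.day.cq.search.* (QueryBuilder, PredicateGroup, Query) and
// com.day.cq.search.result.* (SearchResult, Hit), plus javax.jcr.Session.
Map<String, String> predicates = new HashMap<String, String>();
predicates.put("path", "/content/usergenerated/myforms");           // author-specified storage path
predicates.put("property", "sling:resourceType");
predicates.put("property.value", "myapp/components/formdata");      // the "artificial" resource type
predicates.put("daterange.property", "jcr:created");
predicates.put("daterange.lowerBound", "2014-01-01T00:00:00.000Z"); // start date from the UI
predicates.put("daterange.upperBound", "2014-01-31T23:59:59.999Z"); // end date from the UI
predicates.put("p.limit", "-1");                                    // return all hits

QueryBuilder builder = resourceResolver.adaptTo(QueryBuilder.class);
Query query = builder.createQuery(PredicateGroup.create(predicates),
        resourceResolver.adaptTo(Session.class));
SearchResult result = query.getResult();
for (Hit hit : result.getHits()) {
    String path = hit.getPath(); // each hit is a form-data node to feed the CSV conversion
}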

XPages Dojo Data Grid and Custom REST Service

Can a custom REST service be used as a data source for a Dojo data grid? I need to combine data from three different databases into one data grid. The column data will need to be sortable. The response from the REST service looks to be correct, but I am having trouble binding the JSON data to the Dojo grid columns.
Very interesting -- I tested and saw the same thing with a custom REST service -- it doesn't work when referenced as the storeComponentId of the grid.
I got it to work with the following steps:
Include two dojo modules in the page resources to set up the data store
A pass-thru script tag with code to set up a JSON data store for the grid (uses the dojo modules that the resources specify)
The grid’s store property is set to the variable set up for the data source in the tag. (storeComponentId needs an XPages component name)
Here are some snippets that show the changes:
<xp:this.resources>
    <xp:dojoModule name="dojo.store.JsonRest"></xp:dojoModule>
    <xp:dojoModule name="dojo.data.ObjectStore"></xp:dojoModule>
</xp:this.resources>
...
<xe:restService id="restService1" pathInfo="gridData">
...
<script>
var jsonStore = new dojo.store.JsonRest(
    {target:"CURRENT_PAGE_NAME_HERE.xsp/gridData"}
);
var dataStore = new dojo.data.ObjectStore({objectStore: jsonStore});
</script>
...
<xe:djxDataGrid id="djxDataGrid1" store="dataStore">
There's more information and a full sample here:
http://xcellerant.net/dojo-data-grid-33-reading-custom-rest-service/
The easiest way is to start with the Extension Library. There's a sample for a custom JSON-REST service. While it pulls data from one source, it is easy to extend to pull data from more than one. I strongly suggest you watch out for overall performance.
What I would do:
create a bean that spits out the JSON to the grid (see the sketch after this list)
test it with one database
learn about threads in XPages
use one thread for each database; it cuts down your load time
use a ConcurrentSkipListMap with a comparator so you have the initial JSON in the sort order most useful to the user (or the one from the preferences or the last run)
Nota bene: the Java Collections Framework is your friend (a sometimes difficult one).
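A hedged sketch of how those pieces could fit together (the class, database names, and JSON shape are invented for illustration; the Domino data access is stubbed out):

import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GridDataBean {

    public String getGridJson() throws InterruptedException {
        // Sorted, thread-safe map: key = sort column, value = one pre-built JSON row
        final ConcurrentSkipListMap<String, String> rows =
                new ConcurrentSkipListMap<String, String>(String.CASE_INSENSITIVE_ORDER);

        // One thread per database cuts the load time roughly to the slowest source
        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (final String db : new String[] { "app1.nsf", "app2.nsf", "app3.nsf" }) {
            pool.submit(new Runnable() {
                public void run() {
                    readRowsFrom(db, rows); // stub: one thread per database
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);

        // The map is already in sort order, so the grid receives sorted JSON
        StringBuilder json = new StringBuilder("[");
        String sep = "";
        for (String row : rows.values()) {
            json.append(sep).append(row);
            sep = ",";
        }
        return json.append("]").toString();
    }

    private void readRowsFrom(String db, Map<String, String> rows) {
        // Stub: open the database, read the documents, and put
        // ("sortKey", "{\"col1\":\"value\", ...}") entries into the map.
    }
}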
Let us know how it goes!

SharePoint SOAP service GetListItem with respect to updated date?

Right now I am working on a project to fetch data from a SharePoint list using the SOAP API. I tried and successfully fetched the complete list, but now I want to fetch only data that was updated after a specific date.
Is it possible to fetch such data using a SOAP query? I can see a last-updated field at the bottom when I view a single item. Is it somehow possible to use that field?
Yes, you can use the Web Services to do a lot of things, such as filtering a list result. I don't know which language you use, but with JavaScript you can look at these two frameworks that should help you:
http://aymkdn.github.io/SharepointPlus/ : easy way to create your queries (I created it)
http://spservices.codeplex.com/ : the most popular framework but less easy to use (it's my point of view)
You can also look at the documentation on MSDN (the param to use is query): http://msdn.microsoft.com/en-us/library/lists.lists.getlistitems.aspx
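For example, the query parameter passed to GetListItems can carry a CAML filter on the built-in last-modified column; a sketch (the date value is a placeholder):

<Query>
  <Where>
    <Gt>
      <FieldRef Name="Modified"/>
      <Value Type="DateTime" IncludeTimeValue="TRUE">2014-06-01T00:00:00Z</Value>
    </Gt>
  </Where>
</Query>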
At last I found the answer.
The last update date and time can be retrieved from the list column "Modified".
The SOAP response will have the value in the attribute "ows_Modified".
Muhammad Usman