XPages Dojo Data Grid and Custom REST Service

Can a custom REST service be used as a data source for a Dojo data grid? I need to combine data from three different databases into one data grid, and the column data will need to be sortable. The response from the REST service looks correct, but I am having trouble binding the JSON data to the Dojo grid columns.

Very interesting -- I tested and saw the same thing with a custom REST service -- it doesn't work when referenced as the storeComponentId of the grid.
I got it to work with the following steps:
Include two Dojo modules in the page resources to set up the data store
Add a pass-through script tag with code that sets up a JSON data store for the grid (it uses the Dojo modules that the resources specify)
Set the grid's store property to the variable created in the script tag (storeComponentId expects the name of an XPages component, so it can't reference this store)
Here are some snippets that show the changes:
<xp:this.resources>
<xp:dojoModule name="dojo.store.JsonRest"></xp:dojoModule>
<xp:dojoModule name="dojo.data.ObjectStore"></xp:dojoModule>
</xp:this.resources>
...
<xe:restService id="restService1" pathInfo="gridData">
...
<script>
// Create a JsonRest store that reads from the custom REST service's URL
var jsonStore = new dojo.store.JsonRest(
{target:"CURRENT_PAGE_NAME_HERE.xsp/gridData"}
);
// Wrap it in an ObjectStore so the grid can consume it through the dojo.data API
var dataStore = new dojo.data.ObjectStore({objectStore: jsonStore});
</script>
...
<xe:djxDataGrid id="djxDataGrid1" store="dataStore">
There's more information and a full sample here:
http://xcellerant.net/dojo-data-grid-33-reading-custom-rest-service/
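Since you also need sortable columns, note that JsonRest sends the grid's sort criteria along with each request, so the custom REST service has to apply the sort itself. A minimal sketch, assuming your service reads a plain query-string parameter (the sortBy name is hypothetical, not something the Extension Library defines):
<script>
// By default JsonRest encodes sorting as sort(+field,-field) in the query;
// setting sortParam makes it send an ordinary parameter instead, e.g.
// CURRENT_PAGE_NAME_HERE.xsp/gridData?sortBy=+lastName
var jsonStore = new dojo.store.JsonRest({
  target: "CURRENT_PAGE_NAME_HERE.xsp/gridData",
  sortParam: "sortBy" // hypothetical name; the service must parse and honor it
});
var dataStore = new dojo.data.ObjectStore({objectStore: jsonStore});
</script>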

The easiest way is to start with the Extension Library. There's a sample for a custom JSON REST service. While it pulls data from one source, it is easy to extend to pull data from more than one. I strongly suggest you watch out for overall performance.
What I would do:
create a bean that spits out the JSON to the grid
test it with one database
learn about threads in XPages
use one thread each for the databases, cuts down your load time
use a ConcurrentSkipListMap with a comparator so you have the initial JSON in the sort order most useful to the user (or the one from the preferences or the last run)
Nota bene: the Java Collections Framework is your friend (a sometimes difficult one).
Let us know how it goes!

Related

Automatically map contents of REST JSON body as flat table in Data Flow

With the Copy Data activity it is possible to retrieve data from a REST call (an array of flat JSON objects, similar to OData) and copy the contents to a flat table, keeping the data types from the source but without the necessity to set the schema for that specific data.
When I try to recreate this with a Data Flow, I can't get it to work. When I check the Data Preview of my source, I get a hierarchy with a body (with my OData-like data) and a header. And if I send that to my sink (Avro), it will be saved in this same hierarchical structure (including the header). I know I can fix this manually by using a Select transformation (body.column1, body.column2, etc.), but I want to make my Data Flow dynamic so I'm able to use it with multiple tables/endpoints.
So my REST source gives me the data in that nested body/header structure, and I want it to arrive flat at my sink without hardcoding my schema.
The only workaround I can come up with is retrieving the data using Copy Data, putting it somewhere temporarily, and then using my Data Flow to further transform the data. Is there an easier way to do this? I can't imagine that I'm the only one who has this issue.
Hopefully it's clear and somebody is able to help. Thank you very much in advance.
The Data Flow projection will get the schema from the API, including body and header; hence, when you use auto mapping, everything is going to be saved.
Here are the workarounds you can consider:
As you mentioned, use Copy Data first and then a Data Flow to further transform.
Use Select or Derived Column transformations to transform your data and get all the column names, and then finally use the sink. For this you can use the column pattern matching syntax, so that one condition can match multiple columns to transform.
See the link below to learn about column pattern mappings.
https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-column-pattern

Example of using a service singleton as data source in Angular 2

I need a good example of using a service singleton as data source for my Angular 2 application.
Scenario is as following:
I have an application that is loading prices of some items from the local database (in my case MongoDB).
A few of the components need to use a service which will be the universal source of truth for item prices throughout the application. These prices can be acted upon externally: user can change currency, so they have to be recalculated, or can change the date range for which price averages will be calculated.
So I need to have a singleton service which will load upon app initialization, and components need to load prices only after the service's data store has been initialized with prices. Also, components need to refresh data (I guess using the Observable pattern) when, say, the currency or date range has been changed. Perhaps the best way is to inject the service in the app component, so it gets initialized first?
Is there a recipe or proposed architecture for this kind of app?
I can't call some init function from each component with ngOnInit() because I want data available in multiple components. I need to know in each component when to initialize it with data from Service's data store. I need to know when data is ready.
The way I did it in Angular 1.x is to instantiate the service, and in constructor initialize data, and then when the data is initialized, emit a $rootScope event to tell all components that data is ready.
I can't find a proper recipe to do the same thing in Angular 2.
You need to create a service and register it when bootstrapping your application:
bootstrap(App, [ SingletonService ]);
This way you will have a single instance for the whole application.
If you want to initialize things, you can use its constructor. To notify other elements that use the service, you can expose one or several EventEmitter properties. This way you will be able to emit events when the data is there or when something changes. Components can subscribe to these EventEmitters to be notified...
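Here is a minimal sketch of such a service, using beta-era Angular 2 imports to match the bootstrap call above; the PriceService name, the /api/prices endpoint, and the untyped price array are all illustrative, not from the question:
import {Injectable, EventEmitter} from 'angular2/core';
import {Http} from 'angular2/http';

@Injectable()
export class PriceService {
  prices: any[] = [];
  ready = false;
  // Components subscribe to this to learn when prices are (re)loaded
  pricesChanged = new EventEmitter<any[]>();

  constructor(private http: Http) {
    this.load('USD'); // initial load when the app bootstraps
  }

  // Call again when the currency or date range changes; all subscribers are notified
  load(currency: string) {
    this.http.get('/api/prices?currency=' + currency)
      .subscribe(res => {
        this.prices = res.json();
        this.ready = true;
        this.pricesChanged.emit(this.prices);
      });
  }
}
A component injects PriceService, reads prices immediately if ready is already true, and subscribes to pricesChanged for later updates; this replaces the $rootScope broadcast from the Angular 1.x approach. Note that Http itself has to be provided at bootstrap as well (HTTP_PROVIDERS in that Angular version).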

Using visjs manipulation to create workflow dependencies

We are currently using visjs version 3 to map the dependencies of our custom built workflow engine. This has been WONDERFUL because it helps us to visualize the flow and find invalid or missing dependencies. What we want to do next is simplify the process of building the dependencies using the visjs manipulation feature. The idea would be that we would display a large group of nodes and allow the user to order them correctly. We then want to be able to submit that json structure back to the server for processing.
Would this be possible?
Yes, this is possible.
Vis.js dispatches various events relating to user interactions with the graph (e.g. manipulations or position changes), for which you can add handlers that modify or store the data on change. If you use DataSets to store the nodes and edges of your network, you can always use the DataSets' get() function to retrieve all elements in your handler in JSON format. Then, in your handler, just use an Ajax request to transmit the JSON to your server, storing the entire graph in your DB or saving the JSON as a file.
The opposite applies for loading the graph: simply query the JSON from your server and inject it into the node and edge DataSets using the add() or update() methods.
You can also store the network's current options using the network's getOptions method, which returns all applied options as JSON.
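A minimal sketch of that round trip, with an illustrative /api/workflow endpoint (any Ajax mechanism works):
var container = document.getElementById('network');
var nodes = new vis.DataSet([{id: 1, label: 'Step 1'}]);
var edges = new vis.DataSet([]);
var network = new vis.Network(container, {nodes: nodes, edges: edges}, {});

// Serialize the current graph (including the user's edits) and send it to the server
function saveGraph() {
  var payload = JSON.stringify({nodes: nodes.get(), edges: edges.get()});
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/api/workflow');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(payload);
}

// Load a previously saved graph back into the same DataSets
function loadGraph(saved) {
  nodes.clear();
  edges.clear();
  nodes.add(saved.nodes);
  edges.add(saved.edges);
}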

Need pointers on how report generation can happen in CQ5

We have created a set of forms in CQ5, and we have a requirement that the content of these forms be stored at a specific node. Our forms interact with third-party services and get some data from there as well; this is also stored on the same nodes.
Now we have to give authors permission to go and download these reports, based on ACLs. I will also have to provide them start and end dates, and upon selecting these dates, the content placed in these nodes should be exportable in CSV format.
Can anybody guide me on how to achieve this functionality? I have gone through report generation but need better clarity on how this can be achieved: how will I be able to use the QueryBuilder API, how can I export, and how do I provide the dates in the UI?
This was achieved as described below.
I actually had to override the default report generation mechanism, and I created my own custom report using the report generation tutorial in the CQ documentation.
Once the report templates and components were written, I also overrode the CQ report page component and provided input dates in body.jsp using the date component of Granite.
Once users selected dates, I used the QueryBuilder API to search for nodes at a path (specified by the author; it can differ for different form data). I also created an artificial resource type on the nodes where I was storing the data, which led me to the exact nodes where the data was stored; this property was also passed to QueryBuilder. The JSON returned as the response from QueryBuilder was then supplied to a JS function which converted the data to CSV format.
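A minimal sketch of that last conversion step, assuming the stock QueryBuilder JSON servlet response shape (an object with a hits array); the column handling is illustrative and would be adapted to the stored form properties:
// Convert a QueryBuilder JSON response ({"hits": [...]}) into CSV text
function hitsToCsv(response) {
  var hits = response.hits || [];
  if (hits.length === 0) { return ''; }
  var columns = Object.keys(hits[0]); // header row from the first hit's properties
  var rows = hits.map(function (hit) {
    return columns.map(function (col) {
      var value = hit[col] == null ? '' : String(hit[col]);
      // Quote and escape so commas or quotes in form data don't break the CSV
      return '"' + value.replace(/"/g, '""') + '"';
    }).join(',');
  });
  return columns.join(',') + '\n' + rows.join('\n');
}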

How to store data of complex types for use in Javascript?

This is a recurring theme: either AJAX causes data to be requested from the server and then the data is shown to the user, or the page is pre-populated (i.e. rendered) with data that has some kind of interaction with JavaScript.
Here are a couple of possibilities I see for how to store and manipulate the (complex) data:
Store the data simply in the DOM and use JavaScript to fetch it from the DOM, do something with the data, and manipulate the DOM accordingly to show the results. Fetching and storing the data (multiple hidden fields) can be a real mess, and multiple selectors can be slow compared to getting the data from JavaScript. In the pre-populated scenarios, the benefits are that the data is indexable/crawlable by search engine spiders and the page works without JavaScript enabled (it degrades gracefully).
Use JavaScript as a container for holding the data and use the DOM only to visualize it. When some action is triggered, the only thing fetched from the DOM is something like the entity's ID, so we know which object we are referring to in JavaScript. But how is the pre-populated data pushed to JavaScript? If we simply render it to the DOM, we are back to the fetch-from-DOM scenario. We could render the JSON string into the page's <script> tags and then populate the DOM (but this might have problems with caching?). Or we could request the data lazily using AJAX (but this causes unnecessary server load).
Use a ready-made container, like some jQuery table plugin (jqGrid). But it's not always possible to use such a plugin, because a great deal of customization is needed or the component is simply overkill for your scenario.
Also, do you tend to render as much as possible on the server side (using RenderPartial) and possibly return both the data and its rendered HTML/JavaScript?
I've tried searching for articles about this topic without much success. Any direction, advice, and pointers are welcome.
Check out Ext JS data stores, like JsonStore.
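For example, a minimal Ext JS 3-style sketch (the URL, root, and field names are illustrative):
// A JsonStore fetches JSON from the server and keeps it as typed records
var store = new Ext.data.JsonStore({
  url: '/items.json',        // endpoint returning {"rows": [{...}, ...]}
  root: 'rows',              // property that holds the record array
  fields: ['name', 'value']  // record fields to extract
});
store.load(); // components bound to the store update once the data arrives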
My (current) approach is to keep the JavaScript as minimal as possible. Obviously, it is also appropriate to have only one place that handles the layout of content. Thus, I tend to have the layout rendered as it needs to be for the given place it is to be used. So yes, the Render method will then render everything appropriately (not only raw data, i.e. some JSON string or what-have-you). This, for me, has advantages, as it is then generally trivial to implement a 'non-JavaScript' scenario, where the content is not loaded via Ajax but simply embedded as content on some other page.
However, this is just my approach. It's not necessarily the best way, and may be subject to change. I hope it gives some indication of the decision-making process that can be used to arrive at a specific model.
JavaScript is by default an incredibly powerful dynamic language, and storing complex objects as variables in JavaScript is not an issue. Take for example:
var o = [{ name: 'Test1', value: 1 },
         { name: 'Test2', value: 2 }]; // plain object literals; new Object(...) is redundant
alert(o[0].name); // shows "Test1"
Adding to or removing objects from the array via Ajax calls returning JSON objects is one relatively easy way to keep it current. I don't see a reason to store more than just an ID in the DOM to look up values in JavaScript; regardless of how fast a browser's DOM engine is, it is more elegant to store them in JavaScript.
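A minimal sketch of that ID-lookup pattern (the items map and data-id attribute are illustrative):
// Real data lives in JavaScript, keyed by ID; the DOM carries only the key
var items = {
  1: { name: 'Test1', value: 1 },
  2: { name: 'Test2', value: 2 }
};

document.addEventListener('click', function (e) {
  var row = e.target.closest('[data-id]'); // e.g. <tr data-id="1">...</tr>
  if (row) {
    var item = items[row.getAttribute('data-id')];
    alert(item.name + ': ' + item.value);
  }
});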
You can also use jQuery.data:
jQuery.data( element, key, value )
element : The DOM element to associate with the data.
key : A string naming the piece of data to set.
value : The new data value.
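A short usage sketch (the element ID and key are illustrative):
// Attach a complex object to an element, then read it back later
var row = document.getElementById('row-42');
jQuery.data(row, 'item', { name: 'Test1', value: 1 });
// ... later, e.g. in a click handler:
var item = jQuery.data(row, 'item');
alert(item.name); // "Test1"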