Actual use case scenario of Drools in Talend - drools

Can anyone explain to me what the actual use case of Drools in Talend is? I am trying to learn Drools because I use Talend for data management, but every tutorial or instruction set I find on the internet is just a copy of the official Drools documentation.
I want to see Drools in action (say, an example based on an Employee master and so on). The reason is that, although I am able to create basic conditions in Drools, I can't figure out how to actually implement them.
Can anybody help me with it?

Yes, you can use Drools rules in Talend.
For Talend Open Studio you have to write Java code to integrate Drools into Talend.
If you are using the enterprise version, go to Repository >> Metadata >> Rules.
There you can create rules, which can then be used in a Talend job flow.
For more information, read about the tRule component.
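If you are on Open Studio and take the Java route, the snippet below is a rough sketch of what that integration code could look like. All names here are hypothetical (the Employee fact, the "employee-session" declared in kmodule.xml, the DRL rule shown in the comment), and it assumes the Drools 6+ KIE jars are on the job's classpath (e.g. loaded via tLibraryLoad or a routine):

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class EmployeeRuleRunner {

    // Hypothetical fact class standing in for the "Employee master" record.
    public static class Employee {
        private final String name;
        private final int yearsOfService;
        private double bonus;

        public Employee(String name, int yearsOfService) {
            this.name = name;
            this.yearsOfService = yearsOfService;
        }
        public String getName() { return name; }
        public int getYearsOfService() { return yearsOfService; }
        public double getBonus() { return bonus; }
        public void setBonus(double bonus) { this.bonus = bonus; }
    }

    public static void main(String[] args) {
        // Load the rules found on the classpath; kmodule.xml decides which DRL files
        // belong to the "employee-session" session (both names are hypothetical).
        KieServices kieServices = KieServices.Factory.get();
        KieContainer container = kieServices.getKieClasspathContainer();
        KieSession session = container.newKieSession("employee-session");
        try {
            // Insert a fact -- in a Talend job this could happen per row, e.g. from a tJavaRow.
            Employee emp = new Employee("Alice", 7);
            session.insert(emp);

            // Fire the rules. A DRL rule such as:
            //   rule "Loyalty bonus"
            //   when  $e : Employee( yearsOfService >= 5 )
            //   then  $e.setBonus(1000); end
            // would then fill in the bonus field.
            session.fireAllRules();

            System.out.println(emp.getName() + " bonus = " + emp.getBonus());
        } finally {
            session.dispose();
        }
    }
}
```

In a real job you would typically call something like this from a tJavaRow, inserting one fact per incoming record and writing the fields the rules modified back into the flow.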

Related

IBM DataConnect refine operations

The list of transformations supported by IBM's Bluemix ETL service, Data Connect, is documented here: https://console.ng.bluemix.net/docs/services/dataworks1/using_operations.html#concept_h4k_5tf_xw
I have looked and looked, but with no luck: what if I want to transform some of my data with an operation that is not included here? For example, running custom code on a column to produce some specific output?
Data Connect does not currently support refine operations outside of those provided with the service. We are adding new features and functionality weekly, but if you have a specific operation in mind, please let us know.
I will find out for you if we have the ability to execute custom code on our roadmap.
Regards,
Wesley - IBM Bluemix Data Connect Engineering
As Wes mentions above, in the short term we will continue to add new data preparation and transformation capabilities to the service. Currently there is no extensibility that allows you to code new transformations.
In the longer term we are considering allowing users to edit/extend pipelines using languages like Scala and Python. We don't have a defined date for these new capabilities.
Regards,
Hernando Borda
IBM Bluemix Data Connect Product Manager

How to extract data from web api with Talend Open Studio

How can I extract data with Talend from websites such as the ones below, to do some data analysis:
Airbnb
change.org
monster.com
ebay
I am new to TOS and not familiar with the internet components. I think I may be confused about which connectors to use (tREST, tSOAP, ...). If anyone could help me understand which kind of connectors are needed, that would be great.
You can use the following architecture:
tREST --> tExtractJSONFields (or tExtractXMLField for XML responses), depending on your requirement.
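If it helps to see what that combination boils down to before wiring up the components, here is a plain-Java sketch of the same idea: an HTTP GET followed by pulling fields out of the JSON response. The endpoint URL and the field names ("results", "id", "title") are placeholders, and it assumes the Jackson library is on the classpath:

```java
import java.net.HttpURLConnection;
import java.net.URL;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class RestExtractSketch {
    public static void main(String[] args) throws Exception {
        // 1. The tREST part: issue a GET request against the API endpoint.
        URL url = new URL("https://api.example.com/listings?city=Paris"); // placeholder endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        // 2. The tExtractJSONFields part: parse the body and pull out individual fields.
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(conn.getInputStream());
        for (JsonNode item : root.path("results")) {        // "results" is a placeholder array name
            String id = item.path("id").asText();
            String title = item.path("title").asText();
            System.out.println(id + " | " + title);
        }
        conn.disconnect();
    }
}
```

Note that sites like Airbnb or eBay require either their official APIs (with authentication) or scraping; the pattern above only covers APIs that return JSON.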

How to submit Hive Jobs programmatically from JSP

We are trying to build a wrapper system for business users, and we want to explore the option of submitting Hive queries from a JSP page. I could not find a good example or suggested mechanism for this. Has anyone tried this before? If so, can you share your best ideas? We are looking at a REST API mechanism; if that won't work, we can use Java from JSP/servlets.
Appreciate your support.
Kiran
You can use JDBC. I don't think there is a REST API for Hive.
Since most developers and applications typically use JDBC, this should be the preferred mode.
More details can be found here (assuming you are using a recent Hive version): HiveServer2 Clients.
Sample code is available on that page as well.
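For reference, here is a minimal JDBC sketch along those lines. The host, port, credentials, table and query are all placeholders, and it assumes HiveServer2 is running and the hive-jdbc driver (plus its dependencies) is on the web app's classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryRunner {

    private static final String JDBC_URL = "jdbc:hive2://hive-host:10000/default"; // placeholder

    public static void runQuery(String hql) throws Exception {
        // HiveServer2 JDBC driver class.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(JDBC_URL, "hiveuser", ""); // placeholder credentials
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(hql)) {
            while (rs.next()) {
                // In a servlet you would write these values to the response or a model object
                // instead of printing them.
                System.out.println(rs.getString(1));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        runQuery("SELECT * FROM employees LIMIT 10"); // placeholder query
    }
}
```

From a JSP page you would call something like runQuery() from a servlet or backing bean rather than embedding the JDBC code in the JSP itself.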

How does processmaker engine work?

After I finish designing the process in BPMN notation, does ProcessMaker transform the BPMN to XPDL to execute the process, or does it use BPEL?
I've used ProcessMaker for 3 years, and it seems to me it doesn't use BPEL.
Check this: http://wiki.processmaker.com/index.php/ProcessMaker_Architecture_Diagrams
It doesn't mention anything about BPEL or XPDL.
To execute the process, ProcessMaker generates code files and XML files, which contain the business logic you designed earlier using DynaForms.
So it's not just about designing the process in BPMN notation: you also have to build data entry forms and derivation rules, create user groups, give them permissions, and even do some custom programming.
This is not "magic".
The current version of ProcessMaker, 2.5.0, is not BPMN or BPEL compliant, but the product roadmap includes a BPMN-compliant implementation (http://wiki.processmaker.com/index.php/ProcessMaker_RoadMap).
Currently the engine uses tasks, events, steps, dynaforms, input and output documents and triggers to execute processes.
The current version of ProcessMaker does not have a BPEL or BPMN engine, but it can still execute processes because it has its own engine. To execute a case you go to the Inbox tab and start a new case; of course, you need to configure user access at design time.
I don't know anything about XPDL or BPEL, but in my experience ProcessMaker stores everything in its workspace database. That's why it uses the PMT_ prefix when you create a report table: to separate user-created tables from ProcessMaker system tables. When you create a case, ProcessMaker inserts a row into the APP_DELEGATION table with the process, task, application (case), user, and anything else related to your case.
So basically it serves forms based on the APP_DELEGATION data; this table also stores every step of a case. When you submit a form, it creates a new row in APP_DELEGATION with the same process and application but a new task (TAS_UID), following the path (the arrow on your screen) you drew in the designer.
Basically it just stores information, serves it based on that information, and routes it according to your design. Even uploaded files are recorded in the ProcessMaker database (it creates a UID and other important information, including who uploaded the file). It does not compile or translate the process into another language. Simple, but not quite as simple as it sounds.
ProcessMaker's latest version (released in January 2020), ProcessMaker 4.x, is fully BPMN 2.0 compliant. You can import and export BPMN 2.0 files between ProcessMaker and other BPMN 2.0 compliant designers.
BPEL is really no longer used by anyone in the industry. It lost support a long time ago.
In summary, the server requirements for ProcessMaker 4 can be seen at this link.
ProcessMaker still uses the same stack for installation: Apache or Nginx, a MySQL database, and PHP. Additionally, the Laravel framework is used in ProcessMaker. As BPMN software, ProcessMaker complies with the BPMN 2.0 standard.

JasperReports and custom data sources

I'm looking at embedding JasperReports into an existing web app for reporting. The web app sits on top of an existing database which is ancient and complex, and really not suitable for report writers to write reports against directly.
What I want to look at is writing some kind of wrapper around our existing data access layer (written to make our life easier when talking to the aforementioned ancient and complex DB). Does anyone have any experience writing custom data sources for JasperReports, or doing anything like this?
Updated
I guess I probably wasn't clear in my question, which is probably because my requirements aren't clear either. I want to provide some way for end users to use something like iReport to author reports against the database, and then to use JasperReports Server for scheduling and viewing of the reports. However, the database is really, really nasty and was never designed for use in this way. We've got an access layer around it that the web app uses to talk to it. I want to keep my end users away from the DB altogether, and the idea of a custom data source that uses the access layer seemed like a good option. However, I've found very little documentation on how to do that. Maybe it's just a whole lot easier than I think it is, and I'm trying to make a dead simple thing too complicated.
Updated
Thanks for the answers. I don't think my problem has been solved, but I think the answers have helped to inform the requirements phase.
JasperReports allows you to use a "JavaBean" data source. You can load your data into any JavaBean structure and build the reports against that. Works well.
See the "Custom Data Source" section here.
Every JasperReports template can be fed from different kinds of data sources: one option is hooking it directly to a database through a JDBC driver; another, in your case, is providing a collection of Java beans (POJOs), usually a List.
A JasperReports template is similar to a method definition: it has a name (the compiled JR object) and parameters (a data source and a list of input parameters of the most common Java types).
My suggestion is to use the iReport tool. Open one of the examples that come with the JasperReports bundle, analyze it, and tweak it. It's not that complicated.
UPDATE
Letting customers author JasperReports templates, then compiling them and adding them to the classpath, means that you'll need to open up your system too much. Usually clients provide a description of the desired report and the developer(s) create the data source and design the template. JasperReports templates can have parameters; if these parameters are exposed through the UI, users can change the behavior of reports at runtime.
If you really need to allow more flexibility, then use the API provided by JasperReports for authoring templates. I could imagine a simple DSL for advanced users to communicate with your system and create reports on the fly.
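For that template-authoring route, the sketch below builds a tiny template through the JasperReports design API instead of hand-written JRXML. The field name and layout values are arbitrary examples, and the result can be filled exactly like a compiled JRXML template:

```java
import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperReport;
import net.sf.jasperreports.engine.design.JRDesignBand;
import net.sf.jasperreports.engine.design.JRDesignExpression;
import net.sf.jasperreports.engine.design.JRDesignField;
import net.sf.jasperreports.engine.design.JRDesignSection;
import net.sf.jasperreports.engine.design.JRDesignTextField;
import net.sf.jasperreports.engine.design.JasperDesign;

public class OnTheFlyTemplate {

    public static JasperReport buildReport() throws Exception {
        JasperDesign design = new JasperDesign();
        design.setName("AdHocEmployeeReport");
        design.setPageWidth(595);   // A4 at 72 dpi
        design.setPageHeight(842);
        design.setColumnWidth(555);

        // Declare a field the data source must supply (e.g. the Employee bean's "name" property).
        JRDesignField nameField = new JRDesignField();
        nameField.setName("name");
        nameField.setValueClass(String.class);
        design.addField(nameField);

        // One detail band with a single text field printing $F{name} for each record.
        JRDesignTextField text = new JRDesignTextField();
        text.setX(0);
        text.setY(0);
        text.setWidth(200);
        text.setHeight(20);
        JRDesignExpression expr = new JRDesignExpression();
        expr.setText("$F{name}");
        text.setExpression(expr);

        JRDesignBand detail = new JRDesignBand();
        detail.setHeight(25);
        detail.addElement(text);
        ((JRDesignSection) design.getDetailSection()).addBand(detail);

        return JasperCompileManager.compileReport(design);
    }
}
```

A thin DSL for advanced users could then map their choices (fields, filters, groupings) onto calls like these, without ever exposing raw JRXML or the underlying database.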