Talend - Stats and Logs - On Database - error

I have a job that inserts data from SQL Server into MySQL. I have set the project settings as follows:
Checked the check boxes for Use statistics (tStatCatcher), Use logs (tLogCatcher) and Use volumetrics (tFlowMeterCatcher).
Selected 'On Databases' and put in the table names
(stats_table, logs_table, flowmeter_table) as well. These tables were created beforehand; their schemas were determined using the tCreateTable component.
The problem is that when I run the job, data is inserted into stats_table but not into flowmeter_table.
My job is as follows:
tMSSqlInput --> tMap --> tMysqlOutput
I have not included tStatCatcher, tLogCatcher or tFlowMeterCatcher; the stats and logs for this job are taken from the project settings.
My question: why is no data entered in flowmeter_table? Should I include tStatCatcher, tLogCatcher and tFlowMeterCatcher explicitly in the job for it to run correctly?
I am using TOS
Thanks in advance
Rathi

Using the flow meter requires you to manually configure the flows you want to monitor.
On every flow you want to monitor, right-click the row > Parameters > Advanced settings > Monitor connection.
Then you should be able to see data in your flowmeter table.
If you are using the project settings, you don't need to add the *Catcher components to your job.

You need to use the tStatCatcher, tLogCatcher and tFlowMeterCatcher components in the job directly.
These components already have their schema defined, so you just need to add a tMap and redirect the data into the table you want.
Moreover, in order to use tLogCatcher you need to put some tDie or tWarn components in your job.

Related

how to collect all information about the current Job in Talend data studio

I'm running a job and I want to log information such as:
job name
source details and destination details (file name / table name)
number of input records and number of records processed or saved.
I want to log all of the above information and insert it into MongoDB using Talend Open Studio. Please also explain which components I need to perform that task. Thanks.
You can use the tJava component as below: get the counts from the source and destination and the names of the source and target, then redirect those details to a file in tJava.
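A rough sketch of what that tJava code could look like (the component names tMSSqlInput_1 and tMongoDBOutput_1 and the table/collection names are placeholders; adjust them to whatever your job actually uses):

    // Runs after the main flow has finished; the NB_LINE globals are filled in by the components.
    String job = jobName; // jobName is available inside Talend-generated job code
    Integer rowsRead = (Integer) globalMap.get("tMSSqlInput_1_NB_LINE");
    Integer rowsWritten = (Integer) globalMap.get("tMongoDBOutput_1_NB_LINE");
    String source = "User";      // source table, hard-coded here for illustration
    String target = "user_log";  // target collection, hard-coded here for illustration
    System.out.println("Job: " + job + " | source: " + source + " | target: " + target
        + " | read: " + rowsRead + " | written: " + rowsWritten);

Instead of printing, you could push the same values through a tFixedFlowInput/tMap into a MongoDB output component.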
For more about logging functionality, go through the tutorial below:
https://www.youtube.com/watch?v=SSi8BC58v3k&list=PL2eC8CR2B2qfgDaQtUs4Wad5u-70ala35&index=2
I'd consider using log4j, which already captures most of this information. Using MDC you could enrich the log messages with custom attributes. Log4j has a JSON layout, and there seems to be a MongoDB appender as well.
It might take a bit more time to configure (I'd suggest adding the dependencies via a routine), but once configured it requires no per-job setup. Using log4j you can also create filters, etc.
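A minimal sketch of the MDC idea, assuming slf4j/log4j are on the classpath (the class name and attribute keys below are made up for illustration):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class JobLogging {
        private static final Logger LOG = LoggerFactory.getLogger(JobLogging.class);

        public static void main(String[] args) {
            // Custom attributes that a JSON layout or MongoDB appender can pick up.
            MDC.put("jobName", "load_users");
            MDC.put("sourceTable", "User");
            MDC.put("targetCollection", "user_log");
            try {
                LOG.info("rows read: {}, rows written: {}", 1000, 998);
            } finally {
                MDC.clear(); // clean up so the attributes don't leak into unrelated log lines
            }
        }
    }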

Not able to change parsing schema at run time in Oracle APEX 5

I am using Oracle APEX 5 and Oracle Database 12c.
I have successfully configured Oracle APEX 5 with Oracle DB 12c.
I have created an authentication scheme using a database table, and that authentication scheme works successfully.
But my requirement is that each user has to connect to its own schema
(e.g. user1 = HR; user2 = SCOTT)
within the same application.
In short, the application must run on multiple schemas at run time.
But I have not been able to achieve that. I have tried the following:
the current parsing schema is 'SCOTT'; I tried to change it using:
apex_application.g_flow_owner := 'HR'; --Failed
ALTER SESSION SET CURRENT_SCHEMA = 'HR'; --Failed
I don't understand what to do. Can somebody please help me solve this?
I think you are on the right track; the apex_application.g_flow_owner := 'HR'; command should do the trick, but you have to place it in Shared Components > Security > Security Attributes > Database Session > Initialization PL/SQL Code.
Edit: First of all, having a schema for each user that logs into the application is, I think, not the best approach. Just consider that every modification has to be applied to all of the schemas. I suggest you take a look at Virtual Private Database (VPD); it can help you control data access.
But if you still want to try changing the schema, I think you can do it like this. Create two processes for each page in your application: one at On Load - Before Header and one at On Submit. The process should contain something like this:
BEGIN
  IF :APP_USER = 'SCOTT' THEN
    apex_application.g_flow_owner := 'SCOTT';
  ELSE
    apex_application.g_flow_owner := 'HR';
  END IF;
END;
This way, when Scott loads a page the schema is changed to SCOTT and he sees data from his schema, and when HR loads a page the schema is changed to HR and they see their data. The same thing happens when they submit a page: the schema changes first and then the other operations run.
This second idea is not bulletproof, which is why I advise you to rethink what you want to do.
Edit 2: In Component View, simply click on the plus sign next to "Processes" to add a process, and in the wizard select "On Submit - Before Computations and Validations" for the Point option.

Add metric name in OTSDB via API

I am adding data into OpenTSDB from different sources, and I give the metric name for each data point using an XML file. Also, I don't have any access to OpenTSDB to create metric names via the terminal.
I have referred to the links below:
API PUT
GitHub Issue
In the GitHub issue, I couldn't understand how to use --auto-metric.
I know how to create a metric using the terminal:
Here I am creating the abxcs metric using the terminal:
./tsdb mkmetric abxcs
But how do I create a metric using the API?
FYI: please suggest a solution using Java.
Thanks for the help in advance.
In order to have metric names auto created on-the-fly, you'll need to set
tsd.core.auto_create_metrics = true
in the OpenTSDB configuration file. Ref: http://opentsdb.net/docs/build/html/user_guide/configuration.html
The documentation describes the setting as follows: "Whether or not a data point with a new metric will assign a UID to the metric. When false, a data point with a metric that is not in the database will be rejected and an exception will be thrown."
The CLI equivalent is to pass the --auto-metric switch when starting the tsd process.
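For example, here is a minimal Java sketch that pushes one data point through the HTTP API (the host, port, metric and tag values are assumptions; with auto_create_metrics enabled the metric is created on the fly when the first point arrives):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class TsdbPut {
        public static void main(String[] args) throws Exception {
            // One data point for the metric "abxcs"; the timestamp is in seconds.
            String json = "{\"metric\":\"abxcs\","
                    + "\"timestamp\":" + (System.currentTimeMillis() / 1000) + ","
                    + "\"value\":42,"
                    + "\"tags\":{\"host\":\"server1\"}}";

            URL url = new URL("http://localhost:4242/api/put?details");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }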

Maximo Web Service Data Filter

I've created an enterprise web service in Maximo that uses extsys1. In extsys1 I've created a duplicate of MXPERSONInterface and managed to create a query from it (sync was the default). Now that I've finished my web service I can successfully query Maximo from a SoapUI client and get all the person data, but what I'd like to know is: can I select which data I want to export in my response? For example, ignoring everything except name/lastname/email or anything like that.
If anyone has done that or knows how with any other MBO, any help would be very much appreciated. The thing is, I don't want all the raw data in my response; I want to make it as user-friendly as I can.
There is a way to do import/export of data via Web Services that are dynamically accessed from external applications.
Another thing to note when you're accessing pre-defined object structures in this way is that the response will always contain every single field that exists in that object structure.
I will write down a brief tutorial on how to filter that data so that when you query your object structure you only get a subset of the data in the response. For the sake of this tutorial I will use MXPERSON and will export Firstname, Lastname, City, Country and Postalcode.
First go to Integration > Object Structures > Create New Object Structure. Name it My_MXPERSON, set it to be consumed by INTEGRATION, set the Authorized Application to PERSON, and add a new row for Source Objects, selecting Person from the object list. Now go to More Actions > Include/Exclude Fields. Here you should un-check everything except Firstname, Lastname, City, Country and Postalcode (only those need to be CHECKED). Click Save.
Now we need to create an enterprise service by going to Integration > Enterprise Services > New Enterprise Service. Call your service My_MXPERSON_ES, set the Operation to QUERY and for Object Structure select the My_MXPERSON you created earlier. Click Save.
Next, create a publish channel by going to Integration > Publish Channels > New Publish Channel. Name it My_MXPERSON_PC and for Object Structure select your My_MXPERSON (if you can't find it in the list, go to your object structure and un-check the "Query Only" box). Click Save.
Now you have everything set up to create your external system: Integration > External Systems > New External System. Name it My_MXPERSON_EXTSYS and set the End Point to whichever format you want your response to be in; I use MXXMLFILE. On the left side there are three types of queue you need to set up; I had one option for the first two and two for the last one (select the upper one, which ends with cqin). Check Enabled.
Within your external system, go to Publish Channels, select your My_MXPERSON_PC and enable it.
Within your external system, go to Enterprise Services, select your My_MXPERSON_ES and enable it. Click Save.
The last thing you need to do is create your web service: go to Integration > Web Services > New Web Service from Enterprise Service. Name it My_MXPERSON_Query, select My_MXPERSON_EXTSYS_My_MXPERSON_ES from the list, then select your web service from the list and go to More Actions > Deploy.
Once your web service is deployed, you can access the WSDL file at servername/meaweb/wsdl/webservicename.wsdl.
To test it, we will use SoapUI against the WSDL file.
Create a new SOAP project and copy/paste the URL of the WSDL file. If it loads successfully, paste this into the XML request field:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:max="http://www.ibm.com/maximo">
   <soapenv:Header/>
   <soapenv:Body>
      <max:QueryMy_MXPERSON baseLanguage="EN" transLanguage="EN">
         <max:My_MXPERSONQuery>
            <max:PERSON>
               <max:Firstname>Name you want to query</max:Firstname>
            </max:PERSON>
         </max:My_MXPERSONQuery>
      </max:QueryMy_MXPERSON>
   </soapenv:Body>
</soapenv:Envelope>
Remember to replace "Name you want to query" with an actual name from your table.
Hope this guide helped.
Using Maximo 7.5.0.5, Go To > Integration > External Systems
In External Systems, pick your system that you want to filter records for
Go to the Publish Channels tab
Click on Data Export
In the Export Condition field, enter your where clause to filter your record set
I referenced these steps from IBM Help:
http://publib.boulder.ibm.com/infocenter/tivihelp/v27r1/index.jsp?topic=%2Fcom.ibm.itam.doc%2Fmigrating%2Ft_asset_disposal_export_data.html
Normally, I would just reference the link. In my experience, though, IBM's web site frequently changes its URL structure and occasionally goes offline for "maintenance". For accessibility, I am including the text here. No copyright infringement intended.
Exporting asset disposal data
To provide information for review or for a company that you hire to dispose of assets, you can use the integration framework applications to export a data file with information about assets that you are planning to dispose of.
Before you begin
Before you attempt to export a file, check that the following tasks are completed:
JMS queues are configured. You can use either continuous queue or sequential queue, depending on your business process.
The external system for asset disposal integration is enabled.
The publish channel is enabled.
About this task
The following procedure explains how to export asset disposal data.
Procedure
1) On the navigation bar, click Go To > Integration > External Systems.
2) On the List tab, select the TAMITEXTSYS external system.
3) On the Publish Channels tab of the External Systems application, select the ITASSETDISPOSAL publish channel and click Data Export.
4) In the Export Condition field in the Data Export window, enter an SQL statement that is appropriate for the Maximo® database that you use. This statement specifies the export condition.
Typically conditions filter by location, by site ID, and by status, as shown in the following example.
location = 'DISPOSAL' and siteid = 'BEDFORD' and status not in ('DECOMMISSIONED','DISPOSED')
The SQL statement must use the database names for attributes as shown in the field help. To view the field help, position the cursor in a field and press Alt+F1. The field help displays the database table and column (attribute) in the following format: ASSET.SITEID, where SITEID is the attribute name.
5) Click OK to export the asset data.
What to do next
The location to which the file is exported depends on the global directory set for the system and on the filedir parameter for the endpoint of the external system. If no global directory is set, look in the root of the application server folder. If no filedir parameter is set for the external system, look in the 'flatfiles' sub-directory. For example,
C:\bea\user_projects\domains\maximo_database\flatfiles\TAMITEXTSYS_ITASSETDISPOSALInterface_1236264695765361846.dat
Another way to locate the file is to search the operating system file structure for TAMITEXTSYS_ITASSETDISPOSALInterface*.dat.

Sending multiple emails with data from rows in Talend Open Studio

I'm working on an enterprise application architecture project using Talend.
I have this table: User(Id_user, name_user, Email).
What I want to do is select data from this table and send an email to each user using the tSendMail component.
So far I have managed to connect to the database using tMSSqlInput and send a single email using tSendMail,
but I don't know how to select the values of each row and use them as the "email" for tSendMail.
Can someone help me please? Thank you.
As tSendMail is not a processing component (i.e. it cannot handle more than one vector in input) but a starting component, the best way to do this is to use the good ol' tFlowToIterate, as we did here. Your job will look roughly like:
tMssInput --row--> tFlowToIterate --iterate--> tSendMail
Inside the tFlowToIterate instance you put everything you need from the row into the globalMap. Every data-processing operation should be done before that, on the row context (for example, filtering out the users you don't want the mail sent to, etc.).
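As a hedged sketch, assuming the flow entering tFlowToIterate is called row1 and its column is Email (adjust the names to your own schema), the tSendMail "To" field would then hold a Java expression like:

    // In the tSendMail "To" field:
    ((String) globalMap.get("row1.Email"))

    // The subject or message can reuse other columns the same way, e.g.:
    "Hello " + ((String) globalMap.get("row1.name_user"))

With the iterate link in place, tSendMail fires once per row and reads a fresh Email value from the globalMap each time.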