Is there any way in DataStage to fetch the isuser (the username of the project) into the target table (in the user column)?

I have a job which loads data from source to target, but in addition I want to write the isuser (isuser: the username of that respective project) of the DataStage Designer to my target table. It should show the isuser in the user column. Can you help me with the steps I should follow?
Note
Table headers are as follows
+----+------+--------+--------+--------------+
| id | name | f_name | l_name | user(isuser) |
+----+------+--------+--------+--------------+

You can interrogate the DSODB tables JobRun and JobExec to collect this information. The table structures are documented at
https://docs-ugi.mybluemix.net/docs/content/SSZJPZ_11.7.0/com.ibm.swg.im.iis.ds.monitor.ref.doc/topics/jobruntable.html

You can add a Transformer stage before the target stage and use a Stage Variable in the Transformer. Define a job parameter for the username (e.g. isuser or dsadm) in the job properties, and use this job parameter in the Stage Variable derivation. Right-click the Stage Variables box in the Transformer and select Append New Stage Variable. If you click … and select Job Parameter, it will list all defined job parameters, from which you can select the one set for the username. Afterwards, drag the stage variable to the user column on the output link to the target table.

The DataStage macro DSProjectName returns the project name of the running job.
Unfortunately there is no macro for the executing user.
You could derive this in an Execute Command activity in the controlling sequence, interrogating an appropriate shell variable such as $USER or executing a command such as id, and passing the result into your job as a job parameter. Or indeed, you could add the environment variable $USER itself as a job parameter.
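As a minimal sketch, the command invoked from the Execute Command activity (or a wrapper script) could capture the user and pass it to the job with dsjob. The project name, job name, and parameter name below are placeholders; the job would need a matching parameter defined in its job properties.

```shell
# Capture the OS user executing the job; `id -un` gives the
# effective username, equivalent to `whoami`
RUN_USER=$(id -un)
echo "Running as: ${RUN_USER}"

# Hypothetical invocation: "myproject", "myjob" and "LoadUser" are
# placeholders for your own project, job, and job parameter names.
# dsjob -run -param LoadUser="${RUN_USER}" myproject myjob
```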

Related

Additional Column in ADF through HTTP linked service

I would like to add an additional column during a Copy activity. I cannot use the Get Metadata activity because it goes through an HTTP linked service.
However, I am using a parameter called filename in order to specify the file. Would it be possible to use that parameter in the additional column?
You cannot reference the Dataset parameter in the pipeline directly.
Alternatively, you can use a pipeline parameter to provide the input to the Dataset parameter, and use the same pipeline parameter in the Additional Column value.
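As a sketch, the Copy activity source would then carry the pipeline parameter as an additional column. The source type and exact property shape depend on your dataset; "filename" is the pipeline parameter name assumed here.

```json
{
  "source": {
    "type": "JsonSource",
    "additionalColumns": [
      {
        "name": "filename",
        "value": {
          "value": "@pipeline().parameters.filename",
          "type": "Expression"
        }
      }
    ]
  }
}
```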

How to get the latest file from a fileshare by using ADF?

The files below get loaded into my file share, and new data arrives each day:
xyz_activation_2021-11-06T00-01-13.csv
xyz_activation_2021-11-07T00-01-13.csv
xyz_activation_2021-11-08T00-01-13.csv
So how can I get the latest file in my pipeline using ADF?
Step1: Create the pipeline.
Step2: Add a Get Metadata activity.
Step3: In the Get Metadata activity, select Child Items to loop through your folder.
Step4: Add a ForEach activity over the child items.
Step5: Inside the ForEach activity, create a second Get Metadata activity, and add two field list arguments: Item Name and Last Modified.
Step6: Now create an If Condition activity. For the True condition, use the expression
@greater(formatDateTime(activity('Get Metadata2').output.lastModified,'yyyyMMddHHmmss'), formatDateTime(variables('LatestModifiedDate'),'yyyyMMddHHmmss'))
Step7: Inside the True branch, create a Set Variable activity with the expression below.
@activity('Get Metadata2').output.lastModified
Step8: In the last step, I used a Copy activity to copy the latest file from one container to another.

Talend Open Studio for Data Integration

In Talend Open Studio, I have to add different source files to one output table. How can I fetch the last id in that output table, generate the very next id, and continue insertion from the different sources?
Add a subjob prior to your current subjob.
In a tDBInput component, select your last ID through a query (SELECT MAX(id), SELECT TOP 1, etc.).
Put the result in a variable (a global variable or a context variable), for example in a tJavaRow (context.lastID = input_row.id).
Use this variable in a tMap to generate the next ID, through the Numeric.sequence function.
In the output mapping of your tMap, you would add something like Numeric.sequence("s1", context.lastID + 1, 1).
I think there are plenty of solutions to get the last ID and generate a sequence from there. You can also check the advanced output parameters on your tDBOutput.
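Numeric.sequence is a Talend routine that keeps one counter per sequence name, returning the start value on the first call and incrementing by the step on each later call. Outside Talend, the behaviour can be sketched in plain Java (the routine's real implementation may differ):

```java
import java.util.HashMap;
import java.util.Map;

public class SequenceSketch {
    // One counter per sequence name, mimicking Talend's Numeric.sequence
    private static final Map<String, Integer> seqs = new HashMap<>();

    // First call returns startValue; each subsequent call adds step
    public static int sequence(String name, int startValue, int step) {
        Integer current = seqs.get(name);
        int next = (current == null) ? startValue : current + step;
        seqs.put(name, next);
        return next;
    }

    public static void main(String[] args) {
        int lastID = 100; // e.g. loaded into context.lastID by the first subjob
        // Equivalent of Numeric.sequence("s1", lastID + 1, 1) in the tMap output
        System.out.println(sequence("s1", lastID + 1, 1)); // 101
        System.out.println(sequence("s1", lastID + 1, 1)); // 102
        System.out.println(sequence("s1", lastID + 1, 1)); // 103
    }
}
```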

Copy Data - How to skip Identity columns

I'm designing a Copy Data task where the Sink SQL Server table contains an Identity column. The Copy Data task always wants me to map that column when, in my opinion, it should just not include the column in the list of columns to map. Does anyone know how I can get the ADF Copy Data task to ignore Sink Identity columns?
If you are using the Copy Data tool, and in your SQL Server the ID column is set as auto-increment, then it should not show up at the mapping step. Please tell us if that is not the case.
If you are creating the pipeline/dataset manually, you can just go to the sink dataset's schema tab and remove the ID column. Then go to the copy activity's mapping tab and click Import schemas again. The ID column should have disappeared.
You could execute a SET IDENTITY_INSERT <table> ON statement for the given table before the copy step. After it completes, set it back to OFF.
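As a sketch, assuming a SQL Server table named dbo.MyTable (a placeholder) with an identity column, the wrapping statements would look like this. Note that only one table per session can have IDENTITY_INSERT set to ON at a time.

```sql
-- Allows explicit values to be written into the identity column
-- for the duration of the load (dbo.MyTable is hypothetical).
SET IDENTITY_INSERT dbo.MyTable ON;

-- ... the Copy activity runs here ...

SET IDENTITY_INSERT dbo.MyTable OFF;
```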

Datastage: set web service transformer stage URL from query

I need to set "PortAddress" and "WSDL Address" dynamically using the result of a query.
I've created the oracle Connector stage with my query. For example:
select col1,col2,col3,...,url
from myTable
How can I use "url" column value in the Web Service stage?
Thanks in advance.
This is a general problem, not restricted to your web service transformer. You want to "transfer" data from a data stream to the Sequence level in order to feed it into the next job as a parameter.
Basically there are two main ways to do it:
Parallel Edition: In the first job, select the url from your database and write it to a value file of a parameter set. Use the parameter set in the second job with the new value file. Details: see here
Server Edition: In a server job, you select the data from your database; in a Transformer you can use a DataStage function (DSSetUserStatus) to set the so-called UserStatus for this job. This can then be referenced in the next job of the Sequence.
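To make the Server Edition approach concrete, here is a minimal sketch; the link, column, and job names are placeholders for your own:

```
In the server job's Transformer, call the function from an output
column derivation (or a stage variable):

    DSSetUserStatus(lnk_in.url)

In the sequence, pass it to the next job activity as a parameter
value, referencing the first job activity's user status:

    job_GetUrl.$UserStatus
```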