Is there any possibility, if the stage variable conversion fails, to capture the data into a reject file? - DataStage

We have a stage variable using DateFromDaysSince(Date Column) in a DataStage Transformer. Due to some invalid dates, the DataStage job is failing. Our source is Oracle.
When we check the dates in the table we don't find any issue, but the job fails while the transformation is happening.
Error: Invalid Date [:000-01-01] used for date_from_days_since type conversion
Is there any way to capture those failing records into a reject file and make the parallel job run successfully?

Yes, it is possible.
You can use the IsValidDate or IsValidTimestamp function to check that - check out the details here.
These functions can be used in a Transformer condition to route rows that do not have the expected type to a reject file (or a Peek stage).
When your data is retrieved from a database (as mentioned), the database already enforces the data type, provided the data is stored in the appropriate format. I suggest checking the retrieval method to avoid unnecessary checks or rejects; differing timestamp formats could be an issue.
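For illustration, a minimal sketch of the Transformer setup (the link and column names lnk_in.DAYS_COL and lnk_in.BASE_DATE are placeholders, not from the original job):

    Stage variable:
        svDateOK : IsValidDate(lnk_in.BASE_DATE)

    Constraint on the main output link (only valid rows pass through):
        svDateOK

    Constraint on the reject link (captures the failing rows):
        NOT(svDateOK)

    Derivation on the main output link, now evaluated only for valid rows:
        DateFromDaysSince(lnk_in.DAYS_COL, lnk_in.BASE_DATE)

If the date arrives as a string rather than a date, IsValid("date", ...) can be used for the check instead.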

Related

Jmeter: how to save both success and failure soap requests values into separate CSV files

I want to run JMeter for a SOAP service and save both successful and failed request values into separate CSV files. First of all, I would like to know if this is possible. I am using a CSV input file to generate the request. I have seen some posts here, but I don't know how to extract multiple specific values from the SOAP request depending on the status of the response. As I mentioned, I want to do this for both success and failure responses.
I tried adding an XPath2 Extractor and I can see the Debug Sampler printing the values, but I am not sure how to get them into separate CSV files. First of all, is that doable?
Update: I just realized that my original question was wrong. Saving the response won't help much to identify the failed records. My idea is to identify both the failed and the successful records, and then apply some business logic to the failed ones. Is there any way I can do that? I would like to get all those fields from the CSV file into both the success and the failure output files. Thanks in advance.
If you want to save successful and unsuccessful responses into separate files you can use the Simple Data Writer:
For successful responses: one Simple Data Writer configured to log successes only.
For failures: a second Simple Data Writer configured to log errors only.
If the file has to be CSV and you need only variable values, consider the Flexible File Writer. It doesn't allow you to filter successful and failed responses, but you can add an appropriate column to the CSV and filter on it later.
And finally, you can just use the Sample Variables property so that variable values are stored in JMeter's .jtl results file. It is in CSV format and records whether each request was successful, so filtering it in Excel or an equivalent tool is quite easy.
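As a rough sketch, assuming the CSV Data Set Config defines variables named requestId and customerName (placeholder names), the Sample Variables property goes into user.properties:

    # user.properties - append these variables to every result row in the .jtl file
    sample_variables=requestId,customerName

After a restart, each line of the .jtl file then carries those values alongside the success flag, so successes and failures can be split with a simple filter.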

How to capture an Error Message into a file in DataStage

Is it possible to capture the error message/error field into a file in DataStage?
For example, if some error occurs in the Transformer stage, is it possible to capture the error and the field which had the error into a file? As of now, I am able to capture the entire error record into a file, but not the error message or just the error field.
Thanks!!!
Basically, no. Certainly there is no generic solution. You could create a rejects link from the Transformer stage, but even that is limited in its capability.
I suspect you would be better served reading the error information from the job log, and processing that.
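For example, a rough sketch using the dsjob command-line client (project and job names are placeholders):

    # summarize warning entries from the job log
    dsjob -logsum -type WARNING MyProject MyJob > myjob_warnings.txt
    # print the full text of a single log event (here event id 123)
    dsjob -logdetail MyProject MyJob 123

The resulting text can then be parsed for the message and field name you are after.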

Mapping Data Flows Error The stream is either not connected or column is unavailable

I have a meta-data driven pipeline and a mapping data flow to load my data. When I try to run this pipeline, I get the following error.
{"message":"at Derive 'TargetSATKey'(Line 42/Col 26): Column 'PersonVID' not found. The stream is either not connected or column is unavailable. Details:at Derive 'TargetSATKey'(Line 42/Col 26): Column 'PersonVID' not found. The stream is either not connected or column is unavailable","failureType":"UserError","target":"Data Vault Load","errorCode":"DFExecutorUserError"}
When I debug the mapping data flow, all the components in the data flow work as intended.
I guess that my source connection parameters aren't flowing through properly. Below is an image of my source connection.
Please let me know if you have any thoughts or questions.
I found a resolution to my problem. The cause was that the value being passed in was a string, but when the variable was unpacked its value didn't have quotes around it. Putting in the quotes fixed it.
For example:
'BusinessEntityID'
Please let me know if there are any questions.
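To illustrate the difference (echoing the BusinessEntityID example above): when a value is handed to a data flow string parameter without quotes, the data flow expression language treats it as a column reference and you get the "column not found" error; wrapping it in single quotes makes it a string literal.

    Data flow parameter value (data flow expression):
        BusinessEntityID       <- read as a column reference; fails when that column is not in the stream
        'BusinessEntityID'     <- passed as a string literal; behaves as intended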

How to force to set Pipelines' status to failed

I'm using Copy Data.
When there is some data error, I export the bad rows to a blob.
But in this case, the pipeline's status is still Succeeded. I want to set it to Failed. Is that possible?
When there is some data error.
It depends on what error you mean here.
1. If you mean a common incompatibility or mismatch error, ADF has a built-in feature named fault tolerance in the Copy activity, which supports the three scenarios below:
Incompatibility between the source data type and the sink native type.
Mismatch in the number of columns between the source and the sink.
Primary key violation when writing to SQL Server/Azure SQL Database/Azure Cosmos DB.
If you configure it to log the incompatible rows, you can find the log file at this path: https://[your-blob-account].blob.core.windows.net/[path-if-configured]/[copy-activity-run-id]/[auto-generated-GUID].csv.
If you instead want to abort the job as soon as any error occurs, set the fault tolerance option to abort the activity on the first incompatible row rather than skipping it.
Please see this case: Fault tolerance and log the incompatible rows in Azure Blob storage.
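For reference, a rough sketch of the fault-tolerance properties inside the Copy activity's typeProperties (the linked service name and container path are placeholders):

    "enableSkipIncompatibleRow": true,
    "redirectIncompatibleRowSettings": {
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "path": "copy-activity-logs"
    }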
2. If you are talking about your own logic for the data error, perhaps some business rule, I'm afraid ADF can't detect that for you, though it is a common requirement. However, you could follow this case (How to control data failures in Azure Data Factory Pipelines?) as a workaround. The main idea is to use a Custom activity to divert the bad rows before the Copy activity executes. In the Custom activity you can upload the bad rows to Azure Blob Storage with the .NET SDK as you wish.
Update:
Since you want to log all incompatible rows and force the job to fail at the same time, I'm afraid this cannot be implemented in the Copy activity directly.
However, one idea is to use an If Condition activity after the Copy activity to judge whether the output contains rowsSkipped. If it does, the condition evaluates to False, so you will know some data was skipped and you can check it in blob storage.
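A rough sketch of the If Condition expression, assuming the preceding Copy activity is named 'Copy Data' (rowsSkipped only appears in the output when rows were actually skipped, hence the coalesce):

    @equals(coalesce(activity('Copy Data').output.rowsSkipped, 0), 0)

When this evaluates to False, rows were skipped; the False branch can then run an activity that is made to fail deliberately, so the pipeline run as a whole ends up Failed.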

Mule: after delivering a message, save the current timestamp for later use. What's the correct idiom?

I'm connecting to a third-party web service to retrieve rows from the underlying database. I can optionally pass a parameter like this:
http://server.com/resource?createdAfter=[yyyy-MM-dd hh:ss]
to get only the rows created after a given date.
This means I have to store the current timestamp (using #[function:datestamp:...], no problem) in one message scope and then retrieve it in another.
It also implies the timestamp should be preserved in case of an outage.
Obviously, I could use a subflow containing a file endpoint, saving to a designated file at a given path. But, intuitively, based on my (very!) limited experience, that feels hackish.
What's the correct idiom to solve this?
Thanks!
The Object Store Module is designed just for that: to allow you to save bits of information from your flows.
See:
http://mulesoft.github.io/mule-module-objectstore/mule/objectstore-config.html
https://github.com/mulesoft/mule-module-objectstore/
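As a rough sketch of the idea in Mule 3 XML (namespace declarations omitted; attribute names may differ between module versions, so treat this as illustrative rather than exact):

    <objectstore:config name="timestampStore" partition="sync" persistent="true"/>

    <!-- after a successful delivery, remember when it happened -->
    <objectstore:store config-ref="timestampStore" key="lastRunTimestamp"
                       value-ref="#[server.dateTime.format('yyyy-MM-dd HH:mm:ss')]"/>

    <!-- later, read it back to build the createdAfter query parameter -->
    <objectstore:retrieve config-ref="timestampStore" key="lastRunTimestamp"/>

With persistent="true" the stored value survives a restart, which covers the outage requirement.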