Error related to DataStage master - datastage

I created a server job which is fed from an Oracle table, and the job is attached to a master. When launching the master, it aborts with the error message "Abnormal termination". I found a workaround: recompiling the job before launching the master fixes it.
I'd like to solve this issue properly; any help will be appreciated.
Thank you
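In case it is useful to anyone hitting the same thing: since recompiling by hand before every run is painful, scripting a reset of the job before the master is launched is worth trying as an alternative workaround (a minimal sketch using the dsjob command-line client, assumed to be on PATH on the engine tier; project and job names are placeholders, and whether a reset is enough in place of a full recompile is an assumption on my part):

import subprocess

# Placeholder project and job names - replace with your own.
PROJECT = "MYPROJECT"
JOB = "MyServerJob"

def reset_job(project: str, job: str) -> None:
    """Clear the job's aborted state with 'dsjob -run -mode RESET' before the master runs."""
    subprocess.run(["dsjob", "-run", "-mode", "RESET", project, job], check=True)

if __name__ == "__main__":
    reset_job(PROJECT, JOB)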

Related

ADF Dataflow stuck in progress and fails with the below errors

An ADF pipeline Data Flow task is stuck in progress. It was working seamlessly for the last couple of months, but suddenly the Dataflow gets stuck in progress and times out after a certain time. We are using a managed virtual network IR. I am using a ForEach loop to run the data flow for multiple entities in parallel, and it always randomly gets stuck on the last entity.
What can I try to resolve this?
Error in Dev environment:
Error code: 4508
Spark cluster not found
Error in Prod environment:
Error code: 5000
Failure type: User configuration issue
Details: [plugins.*** ADF.adf-ir-001 WorkspaceType:<ADF> CCID:<f289c067-7c6c-4b49-b0db-783e842a5675>] [Monitoring] Livy Endpoint=[https://hubservice1.eastus.azuresynapse.net:8001/api/v1.0/publish/815b62a1-7b45-4fe1-86f4-ae4b56014311]. Livy Id=[0] Job failed during run time with state=[dead].
I tried the below steps:
Changing the IR configuration
Trying Data Flow retry and a retry interval
Running the ForEach loop one batch at a time instead of 4 batches in parallel. None of the above troubleshooting steps worked. These pipelines have been running for the last 3-4 months without a single failure; suddenly they started failing consistently 3 days ago. The data flow always gets stuck in progress, randomly for a different entity each time, and eventually times out, throwing the above errors.
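For reference, the ForEach settings I toggled look roughly like this in the pipeline JSON, shown here as a Python dict (property names follow the standard ADF pipeline schema; the entity parameter, activity names and retry values are placeholders for what I actually use):

# Sketch of the ForEach activity settings toggled while troubleshooting.
# Names and values are placeholders; property names follow the ADF pipeline schema.
foreach_activity = {
    "name": "ForEachEntity",
    "type": "ForEach",
    "typeProperties": {
        "items": {"value": "@pipeline().parameters.entities", "type": "Expression"},
        "isSequential": False,  # set to True to run one entity at a time
        "batchCount": 4,        # parallel batches; reduced to 1 as a test
        "activities": [
            {
                "name": "RunEntityDataFlow",
                "type": "ExecuteDataFlow",
                "policy": {"retry": 2, "retryIntervalInSeconds": 120},  # DF retry / retry interval
                # data flow reference, staging and compute settings omitted for brevity
            }
        ],
    },
}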
Error Code 4508 Spark cluster not found.
This error can occur for two reasons.
The debug session is getting closed before the data flow finishes its transformations; in this case the recommendation is to restart the debug session.
The second reason is a resource problem, or an outage in that particular region.
Error code 5000 Failure type User configuration issue Details [plugins.*** ADF.adf-ir-001 WorkspaceType: CCID:] [Monitoring] Livy Endpoint=[https://hubservice1.eastus.azuresynapse.net:8001/api/v1.0/publish/815b62a1-7b45-4fe1-86f4-ae4b56014311]. Livy Id=[0] Job failed during run time with state=[dead].
"Livy job state dead caused by unknown error" is a transient error. A Spark cluster is used at the back end of the data flow, and this error is generated by that cluster. To get more information about the error, check the StdOut of the Spark pool execution.
The backend cluster may be experiencing a network problem, a resource problem, or an outage.
If the error persists, my suggestion is to raise a Microsoft support ticket.

Why is the Auto Scaling group not being created by CloudFormation?

While creating an AutoScalingGroup using a CloudFormation template, I am facing the following error:
Failed to receive 1 resource signal(s) for the current batch. Each resource
signal timeout is counted as a FAILURE.
How can I resolve this issue? Please advise.
Thanks in advance
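In case it helps narrow things down: this error usually means the instances in the Auto Scaling group never sent a cfn-signal, or sent it only after the signal timeout in the CreationPolicy or rolling-update policy had expired, so CloudFormation counted the missing signal as a failure. A quick way to see which resource timed out and why is to dump the failed stack events; a minimal boto3 sketch (the stack name is a placeholder):

import boto3

# Placeholder stack name - replace with the stack that failed.
STACK_NAME = "my-asg-stack"

def show_failed_events(stack_name: str) -> None:
    """Print failed stack events so the resource that never signalled is visible."""
    cfn = boto3.client("cloudformation")
    paginator = cfn.get_paginator("describe_stack_events")
    for page in paginator.paginate(StackName=stack_name):
        for event in page["StackEvents"]:
            if "FAILED" in event.get("ResourceStatus", ""):
                print(event["Timestamp"], event["LogicalResourceId"],
                      event.get("ResourceStatusReason", ""))

if __name__ == "__main__":
    show_failed_events(STACK_NAME)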

Deployed jobs stopped working with an image error?

For the last few hours I have been unable to execute deployed Data Fusion pipeline jobs - they just end in an error state almost instantly.
I can run the jobs in Preview mode, but when trying to run deployed jobs this error appears in the logs:
com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Selected software image version '1.2.65-deb9' can no longer be used to create new clusters. Please select a more recent image
I've tried with both an existing instance and a new instance, and all deployed jobs including the sample jobs give this error.
Any ideas? I cannot find any config options for what image is used for execution.
We are currently investigating an issue with the Cloud Dataproc image used by Cloud Data Fusion. We had pinned a version of the Dataproc VM image for the launch, and that version is causing the issue.
We apologize for the inconvenience. We are working to resolve the issue for you as soon as possible.
We will provide an update on this thread.
Nitin
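If anyone needs a stopgap until the fix rolls out: Data Fusion lets you pass runtime arguments to a deployed pipeline, and, if I remember the property correctly (treat it as an assumption), system.profile.properties.imageVersion overrides the Dataproc image version the provisioner uses for that run. A rough sketch of starting a pipeline with that argument through the CDAP REST API; the endpoint URL, namespace, pipeline name, token handling and the image version itself are all placeholders or assumptions:

import requests

# Placeholders / assumptions for one specific Data Fusion instance.
CDAP_ENDPOINT = "https://<instance-api-endpoint>/api"  # from the instance's apiEndpoint
NAMESPACE = "default"
PIPELINE = "my_pipeline"
ACCESS_TOKEN = "<oauth-access-token>"  # e.g. from `gcloud auth print-access-token`

# Runtime argument that (assumption) overrides the Dataproc image version for this run.
runtime_args = {"system.profile.properties.imageVersion": "1.3-deb9"}

resp = requests.post(
    f"{CDAP_ENDPOINT}/v3/namespaces/{NAMESPACE}/apps/{PIPELINE}"
    "/workflows/DataPipelineWorkflow/start",
    json=runtime_args,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()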

StreamSets pipeline to ingest files into HDFS throws a misleading "File not Found" exception

We have a StreamSets job set up which, although it runs successfully, throws the following error:
"UNKNOWN com.streamsets.pipeline.api.StageException: SPOOLDIR_35 -
Spool Directory Runner Failed. Reason
java.nio.file.NoSuchFileException: "
The error is ‘file not found’, but the file is actually processed successfully and the error is still raised. This happens intermittently and not for all of the files being processed.
Here's some background about the job:
The pipeline reads files from the Linux edge node and ingests them into HDFS.
The error occurs on the ‘read’ stage.
We have been running the same pipeline for almost 2 years and had not seen this issue until the last month or so. Nothing about our process has changed recently. The intermittent errors seem to coincide with the latest StreamSets upgrade.
We process about 7 files every 2 hours through this pipeline, so roughly 84 files a day, and the intermittent error seems to occur on 1-3 files per day. All files are still processed into HDFS.
Any idea why this happens?
It looks like you might be hitting SDC-9740. Please watch/vote/comment on this issue, especially if you can provide any more detail that might help us narrow down the cause. It's a P1, so it should be fixed in the next release.

Parallel Shared Container in DataStage job

I created a DataStage parallel job with a Parallel Shared Container. The job was working fine, and I did not make any changes to it. Since yesterday the job has suddenly started failing with the below error:
main_program: Failed to create collation sequence from IBM InfoSphere DataStage Enterprise Edition 9.1.0.6791. Failed to create collation sequence from IBM InfoSphere DataStage Enterprise Edition 9.1.0.6791.
Has anyone faced a similar issue?
Please help, and let me know if further clarification is needed.
The above-mentioned error was caused by an incorrect NLS setting on the Transformer stages present in the container. I changed the NLS to Project Default (OFF) and the issue was solved.