I have configured a Jenkins job to call a Talend data integration job build.
The Talend components in the job have "Die on error" checked. When the Talend job fails, it displays the error, but the Jenkins build still shows SUCCESS.
How can I catch the Talend failure exit code in Jenkins?
I have enabled "Die on error" for each component in the Talend job build. The Jenkins console output is below:
D:\JENKINS-WS\Cloud_Insights\workspace\E2CI-DB-ORACLE-SJ-INTEGRATION\TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI>java -Xms256M -Xmx1024M -cp .;../lib/routines.jar;../lib/activation.jar;../lib/dom4j-1.6.1.jar;../lib/log4j-1.2.16.jar;../lib/mail-1.4.jar;trigger_load_oracle_sj_db_to_e2ci_0_1.jar;load_oracle_sj_db_stg_to_fct_0_1.jar;load_oracle_sj_db_csv_to_stg_0_1.jar;load_oracle_sj_db_stg_to_dim_0_1.jar;load_oracle_sj_db_dim_to_lu_0_1.jar; e2ci_db_integration.trigger_load_oracle_sj_db_to_e2ci_0_1.TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI --context=DEV
tRunJob_1 in TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI calls LOAD_ORACLE_SJ_DB_CSV_TO_STG with:
Exception in component tRunJob_1 (TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI)
java.lang.RuntimeException: Child job returns 1. It doesn't terminate normally.
Exception in component tFileList_1 (LOAD_ORACLE_SJ_DB_CSV_TO_STG)
java.lang.RuntimeException: No file found in directory \prod4271\E2CI-DBOPS\IN
at e2ci_db_integration.load_oracle_sj_db_csv_to_stg_0_1.LOAD_ORACLE_SJ_DB_CSV_TO_STG.tFileList_1Process(LOAD_ORACLE_SJ_DB_CSV_TO_STG.java:1421)
at e2ci_db_integration.load_oracle_sj_db_csv_to_stg_0_1.LOAD_ORACLE_SJ_DB_CSV_TO_STG.runJobInTOS(LOAD_ORACLE_SJ_DB_CSV_TO_STG.java:5292)
at e2ci_db_integration.load_oracle_sj_db_csv_to_stg_0_1.LOAD_ORACLE_SJ_DB_CSV_TO_STG.main(LOAD_ORACLE_SJ_DB_CSV_TO_STG.java:5131)
at e2ci_db_integration.trigger_load_oracle_sj_db_to_e2ci_0_1.TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.tRunJob_1Process(TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.java:736)
at e2ci_db_integration.trigger_load_oracle_sj_db_to_e2ci_0_1.TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.runJobInTOS(TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.java:3192)
at e2ci_db_integration.trigger_load_oracle_sj_db_to_e2ci_0_1.TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.main(TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI.java:3031)
Triggering a new build of E2CI-DB-ORACLE-CHG-INTEGRATION
Finished: SUCCESS
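One common approach (a sketch under assumptions, not taken from the original job configuration) is to run the Talend launcher from an "Execute Windows batch command" build step and propagate the JVM's exit code explicitly, since Jenkins only marks a batch step as failed when the step itself exits nonzero:

rem Execute Windows batch command build step (classpath abbreviated from the log above)
cd /d D:\JENKINS-WS\Cloud_Insights\workspace\E2CI-DB-ORACLE-SJ-INTEGRATION\TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI
java -Xms256M -Xmx1024M -cp .;../lib/*;trigger_load_oracle_sj_db_to_e2ci_0_1.jar e2ci_db_integration.trigger_load_oracle_sj_db_to_e2ci_0_1.TRIGGER_LOAD_ORACLE_SJ_DB_TO_E2CI --context=DEV
rem Fail the Jenkins build if Talend's "Die on error" made the JVM exit nonzero
if %ERRORLEVEL% neq 0 exit /b %ERRORLEVEL%

If the build runs Talend through another wrapper script, make sure that wrapper also forwards the exit code instead of swallowing it.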
Error details below:
Execution optimizations have been disabled for task ':app:compressDevelopmentDebugAssets' to ensure correctness due to the following reasons:
- Gradle detected a problem with the following location: 'C:\Users\LAP\Documents\myapp\build\app\intermediates\merged_assets\developmentDebug\out'. Reason: Task ':app:compressDevelopmentDebugAssets' uses this output of task ':app:copyFlutterAssetsDevelopmentDebug' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.4/userguide/validation_problems.html#implicit_dependency for more details about this problem.
> Task :app:compressDevelopmentDebugAssets FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:compressDevelopmentDebugAssets'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.CompressAssetsWorkAction
> java.lang.OutOfMemoryError (no error message)
I have already added the following to my build.gradle:
dexOptions {
    javaMaxHeapSize "4G"
}
My end goal is to be able to run my integration tests on Firebase Test Lab.
I am following the steps in this doc shared for Firebase:
https://github.com/flutter/flutter/tree/main/packages/integration_test#android-device-testing
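As a side note: dexOptions.javaMaxHeapSize only raises the heap for dexing, while the OutOfMemoryError above is thrown from a Gradle worker running CompressAssetsWorkAction. A commonly suggested mitigation, offered here as an assumption rather than a confirmed fix, is raising the Gradle JVM heap in gradle.properties:

# gradle.properties in the project root -- heap values are illustrative
org.gradle.jvmargs=-Xmx4g -XX:MaxMetaspaceSize=1g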
I tried to run tests in an Azure DevOps build pipeline.
The tests are executed on a new agent, and I got the following error.
Setup: Azure DevOps
##[error]The slice of type 'Discovery' is 'Aborted' because of the error : System.Exception: NUnit Adapter 4.0.0.0: Test discovery complete
Received the command : Stop
TestExecutionHost.ProcessCommand. Stop Command handled
SliceFetch Aborted. Moving to the TestHostEnd phase
Test run '1007278' is in 'Aborted' state.
##[error]Test run is aborted. Logging details of the run logs.
##[error]System.Exception: The test run was aborted, failing the task.
The problem was that the test case filter stopped working after the slicing process. The solution was to rename "Category" to "TestCategory" in the TestCaseFilter.
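For illustration, this is roughly where that filter lives in a YAML pipeline using the VSTest task (the task version, assembly pattern, and category name below are assumptions):

- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: '**\*Tests.dll'
    # the NUnit adapter matches TestCategory=..., not Category=...
    testCaseFilter: 'TestCategory=Smoke'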
The baseline update on Endeca is failing. Please find the logs below:
INFO: Finished pushing content to dgraph.
INFO: [AuthoringMDEXHost] Starting shell utility 'rmdir_dgraph-input-old'.
INFO: [LiveMDEXHostA] Starting shell utility 'cleanDir_local-dgraph-input'.
INFO: [LiveMDEXHostA] Starting shell utility 'rmdir_dgraph-input-old'.
SEVERE: Utility 'rmdir_dgraph-input-old' failed. Refer to utility logs in [ENDECA_CONF]/logs/shell on host LiveMDEXHostA.
Occurred while executing line 7 of valid BeanShell script:
AuthoringDgraphCluster.copyIndexToDgraphServers();
AuthoringDgraphCluster.applyIndex();
LiveDgraphCluster.cleanDirs();
LiveDgraphCluster.copyIndexToDgraphServers();
LiveDgraphCluster.applyIndex();
SEVERE: Error executing valid BeanShell script.
Occurred while executing line 19 of valid BeanShell script:
Dgidx.run();
// distribute index, update Dgraphs
DistributeIndexAndApply.run();
// Upload the generated dimension values to Workbench
WorkbenchManager.cleanDirs();
SEVERE: Caught an exception while invoking method 'run' on object 'BaselineUpdate'. Releasing locks.
Caused by java.lang.reflect.InvocationTargetException
sun.reflect.NativeMethodAccessorImpl invoke0 - null
Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing valid BeanShell script.
Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing valid BeanShell script.
Caused by com.endeca.soleng.eac.toolkit.exception.EacComponentControlException
com.endeca.soleng.eac.toolkit.utility.Utility run - Utility 'rmdir_dgraph-input-old' failed. Refer to utility logs in [ENDECA_CONF]/logs/shell on host LiveMDEXHostA.
INFO: Released lock 'update_lock'.
Has anyone seen this type of error before? Please let me know a potential solution. Also, the baseline update runs for 2 to 3 hours before failing, which makes this especially painful.
Thanks!
Check the logs under
endeca/PlatformServices/workspace/logs/shell
There should be a log named something like appName.rmdir_dgraph-input-old.log with more information about the error.
Most likely the utility is trying to remove a folder that does not exist. If that is the case, just create the folder the utility is trying to remove and run the baseline update again.
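For example, if the log shows the utility failing on a missing dgraph-input-old directory, recreating it on the failing host should be enough (the path below is illustrative; copy the real one from the utility log):

# on LiveMDEXHostA -- example path, take the actual one from appName.rmdir_dgraph-input-old.log
mkdir -p /opt/endeca/apps/appName/data/dgraphs/Dgraph1/dgraph-input-old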
How can I log whether a job succeeded or failed into MongoDB once the job has completed in Talend?
If you want to save the job log into a table, follow these steps:
Main job --> OnSubjobOk --> tFixedFlowInput with the variables jobname, success --> tDBxxOutput (tMongoDBOutput in this case)
Main job --> OnSubjobError --> tFixedFlowInput with the variables jobname, fail --> tDBxxOutput
I tried starting parpool in MATLAB R2015b with the following command:
parpool('local', 3);
This command should allocate 3 workers, but instead I received an error stating that the parallel pool failed to start. The error message is as follows:
Error using parpool (line 94)
Failed to start a parallel pool. (For information in addition to
the causing error, validate the profile 'local' in the Cluster Profile
Manager.)
A similar query was posted at https://nl.mathworks.com/matlabcentral/answers/196549-failed-to-start-a-parallel-pool-in-matlab2015a, and I followed the same procedure to validate the local profile as per the suggestions there.
Using distcomp.feature('LocalUseMpiexec', false) or distcomp.feature('LocalUseMpiexec', true) in startup.m didn't lead to any improvement. Attempting to validate the local profile still gives the following report:
VALIDATION DETAILS
Profile: local
Scheduler Type: Local

Stage: Cluster connection test (parcluster)
Status: Passed
Description: Validation Passed
Command Line Output: (none)
Error Report: (none)
Debug Log: (none)

Stage: Job test (createJob)
Status: Failed
Description: The job errored or did not reach state finished.
Command Line Output:
Failed to determine if job 24 belongs to this cluster because: Unable to read file 'C:\Users\varad001\AppData\Roaming\MathWorks\MATLAB\local_cluster_jobs\R2015b\Job24.in.mat'. No such file or directory.
Error Report: (none)
Debug Log: (none)

Stage: SPMD job test (createCommunicatingJob)
Status: Failed
Description: The job errored or did not reach state finished.
Command Line Output:
Failed to determine if job 25 belongs to this cluster because: Unable to read file 'C:\Users\varad001\AppData\Roaming\MathWorks\MATLAB\local_cluster_jobs\R2015b\Job25.in.mat'. No such file or directory.
Error Report: (none)
Debug Log: (none)

Stage: Pool job test (createCommunicatingJob)
Status: Skipped
Description: Validation skipped due to previous failure.
Command Line Output: (none)
Error Report: (none)
Debug Log: (none)

Stage: Parallel pool test (parpool)
Status: Skipped
Description: Validation skipped due to previous failure.
Command Line Output: (none)
Error Report: (none)
Debug Log: (none)
I am receiving these errors only on my cluster machine; launching parpool on my standalone PC works perfectly. Is there a way to rectify this issue?
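Not from the original thread, but one frequent cause of the "Unable to read ... Job24.in.mat" symptom is stale or inaccessible job metadata in the local job storage folder; a sketch of how to inspect and clear it:

% Inspect where the 'local' profile stores job files, then clear stale jobs
c = parcluster('local');
disp(c.JobStorageLocation)   % should point at ...\local_cluster_jobs\R2015b
delete(c.Jobs);              % remove leftover job records that validation trips over
parpool(c, 3);               % retry the 3-worker pool

If that folder sits on a network share on the cluster machine, also check that the account running MATLAB can read and write it.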