Failure while Loading Parquet file to Azure Synapse - azure-data-factory

I load pre-generated Parquet files (stored in Azure Blob Storage) into Azure Synapse tables every day. The job ran successfully until yesterday, but all of a sudden it is failing.
Note: a test copy job that loads the same Parquet file to a CSV file in the same Blob Storage completes successfully.
Error message while loading the Parquet file to Synapse:
Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=An error occurred when invoking java, message:
java.io.IOException:Error reading summaries total entry:6
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:190)
org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:112)
org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:45)
org.apache.parquet.hadoop.ParquetReader$Builder.build(ParquetReader.java:202)
com.microsoft.datatransfer.bridge.parquet.ParquetBatchReaderBridge.open(ParquetBatchReaderBridge.java:62)
com.microsoft.datatransfer.bridge.parquet.ParquetFileBridge.createReader(ParquetFileBridge.java:22)
java.util.concurrent.ExecutionException:java.lang.ExceptionInInitializerError total entry:9
java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
org.apache.parquet.hadoop.ParquetFileReader.runAllInParallel(ParquetFileReader.java:227)
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:185)
org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:112)
org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:45)
org.apache.parquet.hadoop.ParquetReader$Builder.build(ParquetReader.java:202)
com.microsoft.datatransfer.bridge.parquet.ParquetBatchReaderBridge.open(ParquetBatchReaderBridge.java:62)
com.microsoft.datatransfer.bridge.parquet.ParquetFileBridge.createReader(ParquetFileBridge.java:22)
java.lang.ExceptionInInitializerError:null total entry:24
org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:104)
org.apache.hadoop.security.Groups.<init>(Groups.java:86)
org.apache.hadoop.security.Groups.<init>(Groups.java:66)
org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:248)
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:763)
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
org.apache.parquet.hadoop.ParquetFileReader.readSummaryMetadata(ParquetFileReader.java:360)
org.apache.parquet.hadoop.ParquetFileReader$1.call(ParquetFileReader.java:158)
org.apache.parquet.hadoop.ParquetFileReader$1.call(ParquetFileReader.java:155)
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
java.base/java.lang.Thread.run(Thread.java:844)
java.lang.StringIndexOutOfBoundsException:begin 0, end 3, length 1 total entry:27
java.base/java.lang.String.checkBoundsBeginEnd(String.java:3116)
java.base/java.lang.String.substring(String.java:1885)
org.apache.hadoop.util.Shell.<clinit>(Shell.java:49)
org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:104)
org.apache.hadoop.security.Groups.<init>(Groups.java:86)
org.apache.hadoop.security.Groups.<init>(Groups.java:66)
org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:248)
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:763)
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
org.apache.parquet.hadoop.ParquetFileReader.readSummaryMetadata(ParquetFileReader.java:360)
org.apache.parquet.hadoop.ParquetFileReader$1.call(ParquetFileReader.java:158)
org.apache.parquet.hadoop.ParquetFileReader$1.call(ParquetFileReader.java:155)
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
java.base/java.lang.Thread.run(Thread.java:844)
.,Source=Microsoft.DataTransfer.Richfile.ParquetTransferPlugin,''Type=Microsoft.DataTransfer.Richfile.JniExt.JavaBridgeException,Message=,Source=Microsoft.DataTransfer.Richfile.HiveOrcBridge,
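The innermost frames are the interesting part: java.lang.String.substring is thrown from org.apache.hadoop.util.Shell.<clinit> with "begin 0, end 3, length 1", i.e. the Hadoop code inside the copy activity's Java bridge tried to take a three-character prefix of a one-character string during class initialization. That pattern is typical of version-string parsing meeting an unexpected JRE, so (as an assumption on my part, not something stated in the question) the first thing worth ruling out is which Java runtime the integration runtime is now picking up. A minimal sketch of that check, assuming a self-hosted integration runtime host you can log on to:
# Illustrative check only (on a Windows IR machine, inspect the same values from PowerShell instead).
# The Parquet copy path runs through a local JRE, commonly located via JAVA_HOME, so a recently
# upgraded or swapped-out Java installation is worth ruling out first.
echo "JAVA_HOME=${JAVA_HOME:-<not set>}"
java -version 2>&1    # Parquet copies on a self-hosted IR expect a 64-bit JRE 8 / OpenJDK 8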

Related

Debezium: Replicating data from Oracle read-only database

My use case is to capture CDC data from an Oracle read-only PDB using Debezium. When I tried to install and run Debezium, it threw the error message below.
Can someone please help with the correct Debezium config?
ORA-16000: database or pluggable database open for read-only access
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:628)
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:562)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1145)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:726)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:291)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:492)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:144)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:1034)
at oracle.jdbc.driver.OracleStatement.executeSQLStatement(OracleStatement.java:1507)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1287)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3735)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3933)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4279)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1010)
at io.debezium.connector.oracle.logminer.LogMinerHelper.executeCallableStatement(LogMinerHelper.java:701)
at io.debezium.connector.oracle.logminer.LogMinerHelper.createFlushTable(LogMinerHelper.java:105)
at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:122)
at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:63)
at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:159)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:122)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: Error : 604, Position : 0, Sql = CREATE TABLE LOG_MINING_FLUSH(LAST_SCN NUMBER(19,0)), OriginalSql = CREATE TABLE LOG_MINING_FLUSH(LAST_SCN NUMBER(19,0)), Error Msg = ORA-00604: error occurred at recursive SQL level 1
ORA-16000: database or pluggable database open for read-only access
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:632)
... 24 more

read and transform parquet files in cloud data fusion

I am trying to ingest and transform a Parquet file in Cloud Data Fusion. I can see that I can ingest the Parquet file using the GCS plugin, but when I want to transform it using the Wrangler plugin I don't see any capability to do that. Does the Wrangler plugin have that ability at all, or should I consider another approach? By the way, I just deployed my pipeline to check whether I can ingest the Parquet file from GCS, but I see this error in the logs:
java.lang.NoClassDefFoundError: org/xerial/snappy/Snappy
at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:62) ~[parquet-hadoop-1.8.3.jar:1.8.3]
at org.apache.parquet.hadoop.codec.NonBlockedDecompressorStream.read(NonBlockedDecompressorStream.java:51) ~[parquet-hadoop-1.8.3.jar:1.8.3]
at java.io.DataInputStream.readFully(DataInputStream.java:195) ~[na:1.8.0_275]
at java.io.DataInputStream.readFully(DataInputStream.java:169) ~[na:1.8.0_275]
at org.apache.parquet.bytes.BytesInput$StreamBytesInput.toByteArray(BytesInput.java:204) ~[parquet-encoding-1.8.3.jar:1.8.3]
at org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary.<init>(PlainValuesDictionary.java:89) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary.<init>(PlainValuesDictionary.java:72) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.column.Encoding$1.initDictionary(Encoding.java:90) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.column.Encoding$4.initDictionary(Encoding.java:149) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:343) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:82) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:77) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:270) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:135) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:101) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:154) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:101) ~[parquet-column-1.8.3.jar:1.8.3]
at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:140) ~[parquet-hadoop-1.8.3.jar:1.8.3]
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214) ~[parquet-hadoop-1.8.3.jar:1.8.3]
at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227) ~[parquet-hadoop-1.8.3.jar:1.8.3]
at io.cdap.plugin.format.parquet.input.PathTrackingParquetInputFormat$ParquetRecordReader.nextKeyValue(PathTrackingParquetInputFormat.java:76) ~[1614054281928-0/:na]
at io.cdap.plugin.format.input.PathTrackingInputFormat$TrackingRecordReader.nextKeyValue(PathTrackingInputFormat.java:136) ~[na:na]
at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper.nextKeyValue(CombineFileRecordReaderWrapper.java:90) ~[hadoop-mapreduce-client-core-2.9.2.jar:na]
at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:65) ~[hadoop-mapreduce-client-core-2.9.2.jar:na]
at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:214) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439) ~[scala-library-2.11.8.jar:na]
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439) ~[scala-library-2.11.8.jar:na]
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439) ~[scala-library-2.11.8.jar:na]
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439) ~[scala-library-2.11.8.jar:na]
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439) ~[scala-library-2.11.8.jar:na]
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:130) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:129) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:141) [spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83) [spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78) [spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) [spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.scheduler.Task.run(Task.scala:109) [spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) [spark-core_2.11-2.3.4.jar:2.3.4]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_275]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_275]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_275]
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.Snappy
at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[na:1.8.0_275]
at io.cdap.cdap.common.lang.InterceptableClassLoader.findClass(InterceptableClassLoader.java:44) ~[na:na]
at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[na:1.8.0_275]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[na:1.8.0_275]
... 43 common frames omitted
Do I need to install a specific module on my cluster (Is that possible at all)?
What image version of Dataproc were you using?
It seems that some versions of the Dataproc 2.0 image did not include the Snappy libraries under the Hadoop directories.
As a workaround, you could copy the Snappy jars into the following directories:
sudo cp /usr/lib/hive/lib/snappy-java-*.jar /usr/lib/hadoop-mapreduce/
sudo cp /usr/lib/hive/lib/snappy-java-*.jar /usr/lib/hadoop/lib
sudo cp /usr/lib/hive/lib/snappy-java-*.jar /usr/lib/hadoop-yarn/lib
This can be done in a Dataproc init-action [1] script, configured through a Data Fusion compute profile as follows:
In Data Fusion, go to: System Admin -> Configuration -> System Compute Profiles -> (create new) -> Advanced Settings -> Initialization Actions.
Otherwise, on a deployed pipeline, in studio, go to Configure -> Compute Config -> (select profile) -> Customize -> Advanced Settings -> Initialization Actions.
Note: these options are not available in Developer instances, only Basic/Enterprise.
[1] https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/init-actions
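For illustration only, the copies above can be wrapped in a small init-action script; the script name and bucket below are placeholders, not anything from the original answer:
#!/bin/bash
# Hypothetical init action (uploaded e.g. as gs://<your-bucket>/copy-snappy-jars.sh) that mirrors
# the manual workaround above. Init actions run as root on every node, so sudo is not needed.
set -euo pipefail
for dir in /usr/lib/hadoop-mapreduce /usr/lib/hadoop/lib /usr/lib/hadoop-yarn/lib; do
  cp /usr/lib/hive/lib/snappy-java-*.jar "${dir}/"
done
Upload the script to a Cloud Storage bucket and point the Initialization Actions field (reached via either navigation path above) at its gs:// URI.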

Spark 3 stream job fails with Cannot run program "chmod"

Spark 3.0 on Kubernetes, reading data from Kafka and pushing data out via a third-party Segment IO REST API.
I am facing the error below while running a Spark streaming job:
Caused by: java.io.IOException: Cannot run program "chmod": error=11, Resource temporarily unavailable
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:938)
at org.apache.hadoop.util.Shell.run(Shell.java:901)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:865)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:252)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:232)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:331)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:320)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:351)
at org.apache.hadoop.fs.FileSystem.primitiveCreate(FileSystem.java:1228)
at org.apache.hadoop.fs.DelegateToFileSystem.createInternal(DelegateToFileSystem.java:100)
at org.apache.hadoop.fs.ChecksumFs$ChecksumFSOutputSummer.<init>(ChecksumFs.java:353)
at org.apache.hadoop.fs.ChecksumFs.createInternal(ChecksumFs.java:400)
at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:605)
at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:696)
at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:692)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.create(FileContext.java:698)
at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createTempFile(CheckpointFileManager.scala:310)
at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:133)
at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:136)
at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createAtomic(CheckpointFileManager.scala:316)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.writeBatchToFile(HDFSMetadataLog.scala:131)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.$anonfun$add$3(HDFSMetadataLog.scala:120)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.add(HDFSMetadataLog.scala:118)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:588)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:598)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:585)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:223)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:191)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
... 1 more
Caused by: java.io.IOException: error=11, Resource temporarily unavailable
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
Check your PATH environment variable.
(Maybe you overrode it when adding some Spark/Kafka jars to the path?)
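To make that suggestion concrete, a quick sanity check from inside the driver or executor pod could look like the sketch below. As an additional assumption on my part, error=11 (EAGAIN) from fork can also mean the container is hitting process or memory limits, so the ulimit line is included as a second data point:
# Run inside the Spark driver/executor container (e.g. via kubectl exec); illustrative only.
echo "$PATH"        # confirm /bin and /usr/bin survived any PATH overrides for extra jars
command -v chmod    # Hadoop's Shell helper execs chmod when writing checkpoint files locally
ulimit -u           # error=11 (EAGAIN) on fork can also mean the max-processes limit is exhausted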

ERROR: com.streamsets.pipeline.api.StageException: JDBC_52 - Error starting LogMiner

I am getting the following error while running Oracle CDC. It was running fine until this morning, but since then it has been failing continuously.
What is the exact reason for this error?
The pipeline, cdc_test stopped at 2019-06-15 13:37:46 due to the following error:
UNKNOWN com.streamsets.pipeline.api.StageException: JDBC_52 - Error starting LogMiner
at com.streamsets.pipeline.stage.origin.jdbc.cdc.oracle.OracleCDCSource.startGeneratorThread(OracleCDCSource.java:454)
at com.streamsets.pipeline.stage.origin.jdbc.cdc.oracle.OracleCDCSource.produce(OracleCDCSource.java:325)
at com.streamsets.pipeline.api.base.configurablestage.DSource.produce(DSource.java:38)
at com.streamsets.datacollector.runner.StageRuntime.lambda$execute$2(StageRuntime.java:283)
at com.streamsets.pipeline.api.impl.CreateByRef.call(CreateByRef.java:40)
at com.streamsets.datacollector.runner.StageRuntime.execute(StageRuntime.java:235)
at com.streamsets.datacollector.runner.StageRuntime.execute(StageRuntime.java:298)
at com.streamsets.datacollector.runner.StagePipe.process(StagePipe.java:219)
at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.processPipe(ProductionPipelineRunner.java:810)
at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.runPollSource(ProductionPipelineRunner.java:554)
at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.run(ProductionPipelineRunner.java:383)
at com.streamsets.datacollector.runner.Pipeline.run(Pipeline.java:527)
at com.streamsets.datacollector.execution.runner.common.ProductionPipeline.run(ProductionPipeline.java:109)
at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunnable.run(ProductionPipelineRunnable.java:75)
at com.streamsets.datacollector.execution.runner.standalone.StandaloneRunner.start(StandaloneRunner.java:703)
at com.streamsets.datacollector.execution.AbstractRunner.lambda$scheduleForRetries$0(AbstractRunner.java:349)
at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.lambda$call$0(SafeScheduledExecutorService.java:226)
at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:33)
at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.call(SafeScheduledExecutorService.java:222)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at com.streamsets.datacollector.metrics.MetricSafeScheduledExecutorService$MetricsTask.run(MetricSafeScheduledExecutorService.java:100)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: ORA-01291: missing logfile
ORA-06512: at "SYS.DBMS_LOGMNR", line 58
ORA-06512: at line 1
Typically, this means that while the pipeline was stopped, Oracle deleted one or more logfiles, so the pipeline cannot pick up where it left off.
This blog entry gives a lot of detail on the issue and steps to resolve it: https://streamsets.com/blog/replicating-oracle-mysql-json#ora-01291-missing-logfile
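Before following those steps, it can help to see which archived logs the database still has on hand. A rough sketch using sqlplus (the connect string and credentials are placeholders):
# Illustrative only -- substitute your own connect string and credentials.
sqlplus -s sys/<password>@//dbhost:1521/ORCLCDB as sysdba <<'SQL'
-- Archived log SCN ranges LogMiner could still mine; DELETED = YES means the file is gone.
SELECT name, first_change#, next_change#, first_time, deleted
FROM   v$archived_log
ORDER  BY first_change#;
SQL
If the SCN the pipeline last committed falls inside a deleted range, it cannot resume from there, which is exactly the situation the blog post addresses.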

Merge Operation Fails -gpload utility greenplum

We will try to describe our problem below:
We have a small Greenplum (gpdb) cluster. In it, we are doing data integration using the Talend tool.
We are trying to load the incremental data from one table into another table. Quite simple... I thought...
The job data flow is:
tgreenplumconnection
|
tmssqlinput--->thdfsoutput-->tmap-->tgreenplumgpload--tgreenplumcommit
We are getting this error:
Exception in thread "Thread-1" java.lang.RuntimeException: Cannot run program "gpload": CreateProcess error=2, The system cannot find the file specified
at bigdata.sormaster_stg0_copy_0_1.SorMaster_stg0_Copy$2.run(SorMaster_stg0_Copy.java:6425)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(ProcessImpl.java:386)
at java.lang.ProcessImpl.start(ProcessImpl.java:137)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
at java.lang.Runtime.exec(Runtime.java:620)
at java.lang.Runtime.exec(Runtime.java:528)
at bigdata.sormaster_stg0_copy_0_1.SorMaster_stg0_Copy$2.run(SorMaster_stg0_Copy.java:6413)