Error when using DB2 from Apache Drill

Wondering if anyone has managed to get DB2 (LUW) working with Apache Drill (v1.13.0).
The old driver throws an exception (see below):
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.15.82]
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.fd.a(fd.java:680) ~[db2jcc4.jar:na]
at com.ibm.db2.jcc.am.fd.a(fd.java:60) ~[db2jcc4.jar:na]
at com.ibm.db2.jcc.am.fd.a(fd.java:103) ~[db2jcc4.jar:na]
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(ResultSet.java:4599) ~[db2jcc4.jar:na]
at com.ibm.db2.jcc.am.ResultSet.nextX(ResultSet.java:330) ~[db2jcc4.jar:na]
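For context, ERRORCODE=-4470 generally means the cursor was closed (for example, by autocommit releasing it) before all rows were fetched. One workaround often suggested for the IBM JCC driver, sketched here with the placeholder host from the configuration below, is the allowNextOnExhaustedResultSet URL property:
jdbc:db2://host:50000/TESTDB:allowNextOnExhaustedResultSet=1;
That only papers over the closed-cursor symptom, though, and does not touch the validation error seen with the newer driver.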
With the latest JDBC driver (4.21.29), a query such as SELECT 'test' FROM db2.SYSIBM.SYSDUMMY1 gives this error:
org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: null SQL Query null [Error Id: 16121ad3-919b-44cb-b343-b71ec56314f7 on 10.21.238.244:31010]
Configuration:
{
"type": "jdbc",
"driver": "com.ibm.db2.jcc.DB2Driver",
"url": "jdbc:db2://host:50000/TESTDB",
"username": "db2inst1",
"password": "XXXXXXX",
"enabled": true
}
Full stack trace:
2018-04-14 14:27:08,744 [252e1a73-4a10-5b33-00fa-6109db8680e2:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id 252e1a73-4a10-5b33-00fa-6109db8680e2: SELECT 'test' FROM db2.SYSIBM.SYSDUMMY1
2018-04-14 14:27:08,769 [252e1a73-4a10-5b33-00fa-6109db8680e2:foreman] INFO o.a.d.exec.planner.sql.SqlConverter - User Error Occurred (null)
org.apache.drill.common.exceptions.UserException: VALIDATION ERROR: null
SQL Query null
[Error Id: 16121ad3-919b-44cb-b343-b71ec56314f7 ]
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) ~[drill-common-1.13.0.jar:1.13.0]
at org.apache.drill.exec.planner.sql.SqlConverter.validate(SqlConverter.java:199) [drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateNode(DefaultSqlHandler.java:630) [drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateAndConvert(DefaultSqlHandler.java:202) [drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:174) [drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:146) [drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:84) [drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) [drill-java-exec-1.13.0.jar:1.13.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.lang.NullPointerException: null
at org.apache.calcite.util.NameSet$1.compare(NameSet.java:40) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.util.NameSet$1.compare(NameSet.java:38) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at java.util.TreeMap.compare(TreeMap.java:1295) ~[na:1.8.0_144]
at java.util.TreeMap.put(TreeMap.java:538) ~[na:1.8.0_144]
at org.apache.calcite.util.NameMap.put(NameMap.java:54) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.jdbc.SimpleCalciteSchema.add(SimpleCalciteSchema.java:65) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.add(CalciteSchema.java:609) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.drill.exec.store.jdbc.JdbcStoragePlugin$JdbcCatalogSchema.setHolder(JdbcStoragePlugin.java:346) ~[drill-jdbc-storage-1.13.0.jar:1.13.0]
at org.apache.drill.exec.store.jdbc.JdbcStoragePlugin.registerSchemas(JdbcStoragePlugin.java:434) ~[drill-jdbc-storage-1.13.0.jar:1.13.0]
at org.apache.calcite.jdbc.DynamicRootSchema.loadSchemaFactory(DynamicRootSchema.java:81) ~[drill-java-exec-1.13.0.jar:1.15.0-drill-r0]
at org.apache.calcite.jdbc.DynamicRootSchema.getImplicitSubSchema(DynamicRootSchema.java:66) ~[drill-java-exec-1.13.0.jar:1.15.0-drill-r0]
at org.apache.calcite.jdbc.CalciteSchema.getSubSchema(CalciteSchema.java:233) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.sql.validate.SqlValidatorUtil.getSchema(SqlValidatorUtil.java:992) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntry(SqlValidatorUtil.java:953) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:117) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.drill.exec.planner.sql.SqlConverter$DrillCalciteCatalogReader.getTable(SqlConverter.java:633) ~[drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.planner.sql.SqlConverter$DrillValidator.validateFrom(SqlConverter.java:261) ~[drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3216) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:947) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:928) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:226) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:903) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:613) ~[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.drill.exec.planner.sql.SqlConverter.validate(SqlConverter.java:190) [drill-java-exec-1.13.0.jar:1.13.0]
... 10 common frames omitted
2018-04-14 14:27:08,776 [qtp2102527385-110] ERROR o.a.d.e.server.rest.QueryResources - Query from Web UI Failed
org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: null
SQL Query null
[Error Id: 16121ad3-919b-44cb-b343-b71ec56314f7 on 10.21.238.244:31010]
at org.apache.drill.exec.rpc.AbstractDisposableUserClientConnection.sendResult(AbstractDisposableUserClientConnection.java:85) ~[drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:782) ~[drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.work.foreman.QueryStateProcessor.checkCommonStates(QueryStateProcessor.java:325) ~[drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.work.foreman.QueryStateProcessor.planning(QueryStateProcessor.java:221) ~[drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.work.foreman.QueryStateProcessor.moveToState(QueryStateProcessor.java:83) ~[drill-java-exec-1.13.0.jar:1.13.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:281) ~[drill-java-exec-1.13.0.jar:1.13.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]
2018-04-14 14:27:08,818 [252e1a73-4a10-5b33-00fa-6109db8680e2:foreman] INFO o.apache.drill.exec.work.WorkManager - Waiting for 0 queries to complete before shutting down
2018-04-14 14:27:08,818 [252e1a73-4a10-5b33-00fa-6109db8680e2:foreman] INFO o.apache.drill.exec.work.WorkManager - Waiting for 0 running fragments to complete before shutting down
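Judging from the trace, the NullPointerException is thrown while Drill registers the DB2 sub-schemas (JdbcStoragePlugin.registerSchemas handing names to Calcite's NameSet), which suggests the driver returned a null schema or catalog name. Purely as a hedged experiment, pinning the schema in the JDBC URL via the JCC currentSchema property might be worth a try (the schema name below is a placeholder):
{
"type": "jdbc",
"driver": "com.ibm.db2.jcc.DB2Driver",
"url": "jdbc:db2://host:50000/TESTDB:currentSchema=DB2INST1;",
"username": "db2inst1",
"password": "XXXXXXX",
"enabled": true
}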

Related

Kafka Connect Debezium error while filtering

For this config:
curl -X POST "${KAFKA_CONNECT_HOST}/connectors" -H "Content-Type: application/json" -d '{
"name": "DebeziumSMS",
"config": {
"connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
"tasks.max": 1,
"database.hostname": "bill-srv02.corp.oblakowifi.ru",
"database.port": 1433,
"database.user": "sa",
"database.password": "******",
"database.dbname": "sms",
"database.server.name": "server-test",
"database.history.kafka.bootstrap.servers": "192.168.26.142:9092",
"database.history.kafka.topic": "schema-changes.sms",
"errors.log.enable": "true",
"database.history.skip.unparseable.ddl": "true",
"time.precision.mode": "connect",
"transforms": "filter",
"transforms.filter.type": "io.debezium.transforms.Filter",
"transforms.filter.language": "jsr223.groovy",
"transforms.filter.condition": "value.op == ''c''"
}
}'
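A side note on quoting: to embed the single quotes around c inside a curl payload that is itself single-quoted, the usual shell idiom is '\'' (close quote, escaped literal quote, reopen quote), as in the condition line above; doubled single quotes ('') would make the shell drop the quotes from the condition entirely. A minimal fragment:
-d '{ "transforms.filter.condition": "value.op == '\''c'\''" }'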
With that config, this error occurs:
[2021-10-19 12:50:46,234] ERROR Error encountered in task DebeziumSMS-0. Executing stage 'TRANSFORMATION' with class 'io.debezium.transforms.Filter'. (org.apache.kafka.connect.runtime.errors.LogReporter:66)
io.debezium.DebeziumException: Error while evaluating expression 'value.op == 'c'' for record 'SourceRecord{sourcePartition={server=server-test}, sourceOffset={transaction_id=null, event_serial_no=1, commit_lsn=00000064:0000420c:0003, change_lsn=00000064:0000420c:0002}} ConnectRecord{topic='server-test', kafkaPartition=0, key=Struct{databaseName=sms}, keySchema=Schema{io.debezium.connector.sqlserver.SchemaChangeKey:STRUCT}, value=Struct{source=Struct{version=1.5.0.Final,connector=sqlserver,name=server-test,ts_ms=1634637045699,db=,schema=,table=,change_lsn=00000064:0000420c:0002,commit_lsn=00000064:0000420c:0003},databaseName=sms,schemaName=dbo,ddl=N/A,tableChanges=[Struct{type=ALTER,id="sms"."dbo"."sms",table=Struct{primaryKeyColumnNames=[id],columns=[Struct{name=id,jdbcType=1,typeName=uniqueidentifier,typeExpression=uniqueidentifier,length=36,position=1,optional=false,autoIncremented=false,generated=false}, Struct{name=creator_login,jdbcType=12,typeName=varchar,typeExpression=varchar,length=64,position=2,optional=false,autoIncremented=false,generated=false}, Struct{name=system_ip_address,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=3,optional=false,autoIncremented=false,generated=false}, Struct{name=system_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=4,optional=false,autoIncremented=false,generated=false}, Struct{name=system_sub_id,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=5,optional=false,autoIncremented=false,generated=false}, Struct{name=type_id,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=6,optional=true,autoIncremented=false,generated=false}, Struct{name=date_create,jdbcType=93,typeName=datetime,typeExpression=datetime,length=23,scale=3,position=7,optional=false,autoIncremented=false,generated=false}, Struct{name=date_send,jdbcType=93,typeName=datetime,typeExpression=datetime,length=23,scale=3,position=8,optional=false,autoIncremented=false,generated=false}, Struct{name=date_state,jdbcType=93,typeName=datetime,typeExpression=datetime,length=23,scale=3,position=9,optional=true,autoIncremented=false,generated=false}, Struct{name=time_to_live,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=10,optional=false,autoIncremented=false,generated=false}, Struct{name=state_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=11,optional=false,autoIncremented=false,generated=false}, Struct{name=phone,jdbcType=12,typeName=varchar,typeExpression=varchar,length=32,position=12,optional=false,autoIncremented=false,generated=false}, Struct{name=target_id,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=13,optional=true,autoIncremented=false,generated=false}, Struct{name=provider_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=14,optional=false,autoIncremented=false,generated=false}, Struct{name=message,jdbcType=12,typeName=varchar,typeExpression=varchar,length=2147483647,position=15,optional=false,autoIncremented=false,generated=false}, Struct{name=linked_file,jdbcType=12,typeName=varchar,typeExpression=varchar,length=4000,position=16,optional=true,autoIncremented=false,generated=false}, Struct{name=error_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=17,optional=true,autoIncremented=false,generated=false}, Struct{name=error_msg,jdbcType=12,typeName=varchar,typeExpression=varchar,length=4000,position=18,optional=true,autoIncremented=false,generated=false}]}}]}, 
valueSchema=Schema{io.debezium.connector.sqlserver.SchemaChangeValue:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}'
at io.debezium.transforms.scripting.Jsr223Engine.eval(Jsr223Engine.java:116)
at io.debezium.transforms.Filter.doApply(Filter.java:33)
at io.debezium.transforms.ScriptingTransformation.apply(ScriptingTransformation.java:189)
at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:323)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:248)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.script.ScriptException: org.apache.kafka.connect.errors.DataException: op is not a valid field name
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:320)
at org.codehaus.groovy.jsr223.GroovyCompiledScript.eval(GroovyCompiledScript.java:71)
at javax.script.CompiledScript.eval(CompiledScript.java:92)
at io.debezium.transforms.scripting.Jsr223Engine.eval(Jsr223Engine.java:107)
... 16 more
Caused by: org.apache.kafka.connect.errors.DataException: op is not a valid field name
at org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254)
at org.apache.kafka.connect.data.Struct.get(Struct.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at org.codehaus.groovy.runtime.metaclass.MethodMetaProperty$GetMethodMetaProperty.getProperty(MethodMetaProperty.java:62)
at org.codehaus.groovy.runtime.callsite.GetEffectivePojoPropertySite.getProperty(GetEffectivePojoPropertySite.java:63)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:329)
at Script1.run(Script1.groovy:1)
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317)
... 19 more
[2021-10-19 12:50:46,237] INFO WorkerSourceTask{id=DebeziumSMS-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:487)
[2021-10-19 12:50:46,237] ERROR WorkerSourceTask{id=DebeziumSMS-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:184)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:323)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:248)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.debezium.DebeziumException: Error while evaluating expression 'value.op == 'c'' for record 'SourceRecord{sourcePartition={server=server-test}, sourceOffset={transaction_id=null, event_serial_no=1, commit_lsn=00000064:0000420c:0003, change_lsn=00000064:0000420c:0002}} ConnectRecord{topic='server-test', kafkaPartition=0, key=Struct{databaseName=sms}, keySchema=Schema{io.debezium.connector.sqlserver.SchemaChangeKey:STRUCT}, value=Struct{source=Struct{version=1.5.0.Final,connector=sqlserver,name=server-test,ts_ms=1634637045699,db=,schema=,table=,change_lsn=00000064:0000420c:0002,commit_lsn=00000064:0000420c:0003},databaseName=sms,schemaName=dbo,ddl=N/A,tableChanges=[Struct{type=ALTER,id="sms"."dbo"."sms",table=Struct{primaryKeyColumnNames=[id],columns=[Struct{name=id,jdbcType=1,typeName=uniqueidentifier,typeExpression=uniqueidentifier,length=36,position=1,optional=false,autoIncremented=false,generated=false}, Struct{name=creator_login,jdbcType=12,typeName=varchar,typeExpression=varchar,length=64,position=2,optional=false,autoIncremented=false,generated=false}, Struct{name=system_ip_address,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=3,optional=false,autoIncremented=false,generated=false}, Struct{name=system_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=4,optional=false,autoIncremented=false,generated=false}, Struct{name=system_sub_id,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=5,optional=false,autoIncremented=false,generated=false}, Struct{name=type_id,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=6,optional=true,autoIncremented=false,generated=false}, Struct{name=date_create,jdbcType=93,typeName=datetime,typeExpression=datetime,length=23,scale=3,position=7,optional=false,autoIncremented=false,generated=false}, Struct{name=date_send,jdbcType=93,typeName=datetime,typeExpression=datetime,length=23,scale=3,position=8,optional=false,autoIncremented=false,generated=false}, Struct{name=date_state,jdbcType=93,typeName=datetime,typeExpression=datetime,length=23,scale=3,position=9,optional=true,autoIncremented=false,generated=false}, Struct{name=time_to_live,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=10,optional=false,autoIncremented=false,generated=false}, Struct{name=state_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=11,optional=false,autoIncremented=false,generated=false}, Struct{name=phone,jdbcType=12,typeName=varchar,typeExpression=varchar,length=32,position=12,optional=false,autoIncremented=false,generated=false}, Struct{name=target_id,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=13,optional=true,autoIncremented=false,generated=false}, Struct{name=provider_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=14,optional=false,autoIncremented=false,generated=false}, Struct{name=message,jdbcType=12,typeName=varchar,typeExpression=varchar,length=2147483647,position=15,optional=false,autoIncremented=false,generated=false}, Struct{name=linked_file,jdbcType=12,typeName=varchar,typeExpression=varchar,length=4000,position=16,optional=true,autoIncremented=false,generated=false}, Struct{name=error_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=17,optional=true,autoIncremented=false,generated=false}, Struct{name=error_msg,jdbcType=12,typeName=varchar,typeExpression=varchar,length=4000,position=18,optional=true,autoIncremented=false,generated=false}]}}]}, 
valueSchema=Schema{io.debezium.connector.sqlserver.SchemaChangeValue:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}'
at io.debezium.transforms.scripting.Jsr223Engine.eval(Jsr223Engine.java:116)
at io.debezium.transforms.Filter.doApply(Filter.java:33)
at io.debezium.transforms.ScriptingTransformation.apply(ScriptingTransformation.java:189)
at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
... 11 more
Caused by: javax.script.ScriptException: org.apache.kafka.connect.errors.DataException: op is not a valid field name
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:320)
at org.codehaus.groovy.jsr223.GroovyCompiledScript.eval(GroovyCompiledScript.java:71)
at javax.script.CompiledScript.eval(CompiledScript.java:92)
at io.debezium.transforms.scripting.Jsr223Engine.eval(Jsr223Engine.java:107)
... 16 more
Caused by: org.apache.kafka.connect.errors.DataException: op is not a valid field name
at org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254)
at org.apache.kafka.connect.data.Struct.get(Struct.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at org.codehaus.groovy.runtime.metaclass.MethodMetaProperty$GetMethodMetaProperty.getProperty(MethodMetaProperty.java:62)
at org.codehaus.groovy.runtime.callsite.GetEffectivePojoPropertySite.getProperty(GetEffectivePojoPropertySite.java:63)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:329)
at Script1.run(Script1.groovy:1)
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317)
... 19 more
[2021-10-19 12:50:46,237] INFO Stopping down connector (io.debezium.connector.common.BaseSourceTask:238)
After recreating the Debezium connector with a DELETE followed by a POST, Debezium works fine for a few days. But then the error repeats and the connector stops with this error. Restarting the Kafka Connect (distributed) service brings the error back as well.
Connector error trace retrieved via the REST API:
"trace":"org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)\n\tat org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:323)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:248)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: io.debezium.DebeziumException: Error while evaluating expression 'value && value.op == 'c'' for record 'SourceRecord{sourcePartition={server=server-test}, sourceOffset={transaction_id=null, event_serial_no=1, commit_lsn=00000064:0000420c:0003, change_lsn=00000064:0000420c:0002}} ConnectRecord{topic='server-test', kafkaPartition=0, key=Struct{databaseName=sms}, keySchema=Schema{io.debezium.connector.sqlserver.SchemaChangeKey:STRUCT}, value=Struct{source=Struct{version=1.5.0.Final,connector=sqlserver,name=server-test,ts_ms=1634805618423,db=,schema=,table=,change_lsn=00000064:0000420c:0002,commit_lsn=00000064:0000420c:0003},databaseName=sms,schemaName=dbo,ddl=N/A,tableChanges=[Struct{type=ALTER,id=\"sms\".\"dbo\".\"sms\",table=Struct{primaryKeyColumnNames=[id],columns=[Struct{name=id,jdbcType=1,typeName=uniqueidentifier,typeExpression=uniqueidentifier,length=36,position=1,optional=false,autoIncremented=false,generated=false}, Struct{name=creator_login,jdbcType=12,typeName=varchar,typeExpression=varchar,length=64,position=2,optional=false,autoIncremented=false,generated=false}, Struct{name=system_ip_address,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=3,optional=false,autoIncremented=false,generated=false}, Struct{name=system_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=4,optional=false,autoIncremented=false,generated=false}, Struct{name=system_sub_id,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=5,optional=false,autoIncremented=false,generated=false}, Struct{name=type_id,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=6,optional=true,autoIncremented=false,generated=false}, Struct{name=date_create,jdbcType=93,typeName=datetime,typeExpression=datetime,length=23,scale=3,position=7,optional=false,autoIncremented=false,generated=false}, Struct{name=date_send,jdbcType=93,typeName=datetime,typeExpression=datetime,length=23,scale=3,position=8,optional=false,autoIncremented=false,generated=false}, Struct{name=date_state,jdbcType=93,typeName=datetime,typeExpression=datetime,length=23,scale=3,position=9,optional=true,autoIncremented=false,generated=false}, Struct{name=time_to_live,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=10,optional=false,autoIncremented=false,generated=false}, 
Struct{name=state_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=11,optional=false,autoIncremented=false,generated=false}, Struct{name=phone,jdbcType=12,typeName=varchar,typeExpression=varchar,length=32,position=12,optional=false,autoIncremented=false,generated=false}, Struct{name=target_id,jdbcType=12,typeName=varchar,typeExpression=varchar,length=128,position=13,optional=true,autoIncremented=false,generated=false}, Struct{name=provider_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=14,optional=false,autoIncremented=false,generated=false}, Struct{name=message,jdbcType=12,typeName=varchar,typeExpression=varchar,length=2147483647,position=15,optional=false,autoIncremented=false,generated=false}, Struct{name=linked_file,jdbcType=12,typeName=varchar,typeExpression=varchar,length=4000,position=16,optional=true,autoIncremented=false,generated=false}, Struct{name=error_id,jdbcType=4,typeName=int,typeExpression=int,length=10,scale=0,position=17,optional=true,autoIncremented=false,generated=false}, Struct{name=error_msg,jdbcType=12,typeName=varchar,typeExpression=varchar,length=4000,position=18,optional=true,autoIncremented=false,generated=false}]}}]}, valueSchema=Schema{io.debezium.connector.sqlserver.SchemaChangeValue:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}'\n\tat io.debezium.transforms.scripting.Jsr223Engine.eval(Jsr223Engine.java:116)\n\tat io.debezium.transforms.Filter.doApply(Filter.java:33)\n\tat io.debezium.transforms.ScriptingTransformation.apply(ScriptingTransformation.java:189)\n\tat org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)\n\t... 11 more\nCaused by: javax.script.ScriptException: org.apache.kafka.connect.errors.DataException: op is not a valid field name\n\tat org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:320)\n\tat org.codehaus.groovy.jsr223.GroovyCompiledScript.eval(GroovyCompiledScript.java:71)\n\tat javax.script.CompiledScript.eval(CompiledScript.java:92)\n\tat io.debezium.transforms.scripting.Jsr223Engine.eval(Jsr223Engine.java:107)\n\t... 
16 more\nCaused by: org.apache.kafka.connect.errors.DataException: op is not a valid field name\n\tat org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254)\n\tat org.apache.kafka.connect.data.Struct.get(Struct.java:74)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)\n\tat groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)\n\tat org.codehaus.groovy.runtime.metaclass.MethodMetaProperty$GetMethodMetaProperty.getProperty(MethodMetaProperty.java:62)\n\tat org.codehaus.groovy.runtime.callsite.GetEffectivePojoPropertySite.getProperty(GetEffectivePojoPropertySite.java:63)\n\tat org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:329)\n\tat Script3.run(Script3.groovy:1)\n\tat org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317)\n\t... 19 more\n"}
Caused by: org.apache.kafka.connect.errors.DataException: op is not a valid field name
Why does this happen, and what could be wrong? Thank you.
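For what it's worth, both traces show the failing record is a schema-change event (valueSchema=Schema{io.debezium.connector.sqlserver.SchemaChangeValue:STRUCT}) on the server-test topic, and schema-change events carry no op field, so the expression fails regardless of how value itself is guarded. One hedged way around this, assuming Kafka Connect 2.6+ and that the data topics follow the server-test.<schema>.<table> naming used by the SQL Server connector, is to scope the filter to data topics with a built-in predicate (the pattern below is an assumption based on the topics in the trace):
"transforms": "filter",
"transforms.filter.type": "io.debezium.transforms.Filter",
"transforms.filter.language": "jsr223.groovy",
"transforms.filter.condition": "value.op == 'c'",
"transforms.filter.predicate": "isDataTopic",
"predicates": "isDataTopic",
"predicates.isDataTopic.type": "org.apache.kafka.connect.transforms.predicates.TopicNameMatches",
"predicates.isDataTopic.pattern": "server-test\\..+"
With the predicate in place, the Groovy condition only ever sees change events, which do have op; alternatively, a null-safe condition such as value.schema().field('op') != null && value.op == 'c' should also sidestep the DataException.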

Debezium: RestClientException: Leader not known; error code: 50004

I am getting exceptions when I try to deploy a Debezium connector to Kafka Connect. As a result, snapshots are not created and CDC streaming is also blocked. The problem is that I cannot tell where the issue lies: in Kafka Connect, in the Schema Registry, or in the Debezium connector.
{
"name": "my-connector",
"config": {
"connector.class": "io.debezium.connector.mysql.MySqlConnector",
"database.hostname": "root",
"database.port": "3306",
"database.user": "USER",
"database.password": "PASSWORD",
"database.server.id": "184056",
"database.server.name": "dbz",
"table.include.list": "T1,T2",
"snapshot.mode": "initial",
"database.history.kafka.bootstrap.servers": "xxx:9092",
"database.history.kafka.topic": "dbz.myhistory",
"plugin.path": "/usr/share/java/debezium-mysql-connect/",
"database.serverTimezone": "America/Los_Angeles",
"snapshot.locking.mode": "none",
"include.query": "true",
"key.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url": "XXX",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "XXX"
}
}
Kafka Connect exception logs:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:294)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:323)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:247)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Failed to serialize Avro data from topic dbz :
at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:91)
at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:63)
at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$1(WorkerSourceTask.java:294)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 11 more
Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"SchemaChangeKey","namespace":"io.debezium.connector.mysql","fields":[{"name":"databaseName","type":"string"}],"connect.name":"io.debezium.connector.mysql.SchemaChangeKey"}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Leader not known.; error code: 50004
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:351)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:494)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:485)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:458)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:206)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:268)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:244)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:74)
at io.confluent.connect.avro.AvroConverter$Serializer.serialize(AvroConverter.java:143)
at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:84)
at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:63)
at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$1(WorkerSourceTask.java:294)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:294)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:323)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:247)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2021-09-20 05:29:23,849] ERROR WorkerSourceTask{id=my-connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:187)
[2021-09-20 05:29:23,849] INFO Stopping down connector (io.debezium.connector.common.BaseSourceTask:241)
[2021-09-20 05:29:23,853] WARN Snapshot was interrupted before completion (io.debezium.pipeline.source.AbstractSnapshotChangeEventSource:74)
[2021-09-20 05:29:23,853] INFO Snapshot - Final stage (io.debezium.pipeline.source.AbstractSnapshotChangeEventSource:82)
[2021-09-20 05:29:23,854] WARN Change event source executor was interrupted (io.debezium.pipeline.ChangeEventSourceCoordinator:132)
java.lang.InterruptedException: Interrupted while processing event SchemaChangeEvent [database=mydb, schema=null, ddl=DROP TABLE IF EXISTS `db`.`t3`, tables=[], type=DROP]
at io.debezium.connector.mysql.MySqlSnapshotChangeEventSource.createSchemaChangeEventsForTables(MySqlSnapshotChangeEventSource.java:539)
at io.debezium.relational.RelationalSnapshotChangeEventSource.doExecute(RelationalSnapshotChangeEventSource.java:127)
at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:70)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:118)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Schema Registry exception logs:
io.confluent.kafka.schemaregistry.rest.exceptions.RestUnknownLeaderException: Leader not known.
at io.confluent.kafka.schemaregistry.rest.exceptions.Errors.unknownLeaderException(Errors.java:153)
at io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource.register(SubjectVersionsResource.java:286)
at jdk.internal.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:159)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:475)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:397)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
at org.glassfish.jersey.servlet.ServletContainer.serviceImpl(ServletContainer.java:386)
at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:561)
at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:502)
at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:439)
at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1435)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1350)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:763)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:516)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:380)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:383)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: io.confluent.kafka.schemaregistry.exceptions.UnknownLeaderException: Register schema request failed since leader is unknown
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.registerOrForward(KafkaSchemaRegistry.java:594)
at io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource.register(SubjectVersionsResource.java:266)
... 60 more
Can someone help me understand what this exception means and how to resolve it?
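As a hedged pointer: the chain bottoms out in UnknownLeaderException from KafkaSchemaRegistry.registerOrForward, meaning the Schema Registry cluster has no elected leader to accept or forward the registration, so the connector is only the messenger here. It is worth checking that at least one Schema Registry instance is leader-eligible and can reach its backing Kafka store, e.g. in schema-registry.properties (property names as in recent Confluent versions; older releases used master.eligibility):
# at least one instance must be allowed to become leader
leader.eligibility=true
# the Kafka cluster backing the schema store must be reachable
kafkastore.bootstrap.servers=PLAINTEXT://xxx:9092
kafkastore.topic=_schemas
The Schema Registry's own logs around startup and election time usually show why leadership was never established.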

How to resolve JsonParseException for InfluxDB sink connector

The JSON data in the test topic can be seen inside the exception below.
This is the connector configuration:
curl -i -X PUT -H "Content-Type:application/json" \
http://localhost:8089/connectors/influxdb-sink-test-20210615/config \
-d '{
"connector.class" : "io.confluent.influxdb.InfluxDBSinkConnector",
"value.converter" : "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false",
"key.converter" : "org.apache.kafka.connect.storage.StringConverter",
"topics" : "test3",
"influxdb.url" : "http://localhost:8086",
"influxdb.db" : "exemDB",
"measurement.name.format" : "${topic}"
}'
An exception is raised in the Kafka connector because the tags field is not converted properly. If I remove the tags field and produce the data, it inserts into InfluxDB without problems. How can I properly send the tags field? The full exception is below.
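Looking at the source excerpt in the parse error below, tags is serialized as a quoted string whose embedded quotes are not escaped ("tags":"[{"tagId":0,...), which is invalid JSON, so the JsonConverter fails before the connector ever sees the record. A hedged sketch of a shape that would at least parse, with the field list abbreviated, is to produce tags as a real JSON array instead of a string:
{
"dataType": "numeric",
"resourceName": "tcore-master-01",
"tags": [
{"tagId": 0, "tagValueId": 0, "tag": "...", "value": "..."},
{"tagId": 7, "tagValueId": 2103044, "tag": "...", "value": "tcore-master-01"}
],
"metricSeq": 930
}
Note that the InfluxDB sink may additionally expect tags to be a flat map of tag name to string value rather than an array of objects; that part is an assumption worth verifying against the connector's documentation.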
[2021-06-17 09:24:59,521] ERROR WorkerSinkTask{id=influxdb-sink-tcore-test-20210615-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:513)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:490)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:334)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:513)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 13 more
Caused by: org.apache.kafka.common.errors.SerializationException: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('t' (code 116)): was expecting comma to separate Object entries
at [Source: (byte[])"{"dataType":"numeric","parentResourceId":0,"resourceId":1037092,"resourceUUID":"1037092","resourceName":"tcore-master-01","resourceTypeId":5,"resourceTypeName":"물리서버","resourceLocation":"미지정","ip":"localhost","tags":"[{"tagId":0,"tagValueId":0,"tag":"미지정","value":"미지정"},{"tagId":7,"tagValueId":2103044,"tag":"물리서버","value":"tcore-master-01"},{"tagId":52,"tagValueId":2103045,"tag":"미지정","value":"tcore-master01"}]","metricSeq":930,"metricName":"command_"[truncated 279 bytes]; line: 1, column: 242]
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('t' (code 116)): was expecting comma to separate Object entries
at [Source: (byte[])"{"dataType":"numeric","parentResourceId":0,"resourceId":1037092,"resourceUUID":"1037092","resourceName":"tcore-master-01","resourceTypeId":5,"resourceTypeName":"물리서버","resourceLocation":"미지정","ip":"localhost","tags":"[{"tagId":0,"tagValueId":0,"tag":"미지정","value":"미지정"},{"tagId":7,"tagValueId":2103044,"tag":"물리서버","value":"tcore-master-01"},{"tagId":52,"tagValueId":2103045,"tag":"미지정","value":"tcore-master01"}]","metricSeq":930,"metricName":"command_"[truncated 279 bytes]; line: 1, column: 242]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1804)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:669)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:567)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextFieldName(UTF8StreamJsonParser.java:980)
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:247)
at com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:68)
at com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:15)
at com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4056)
at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2571)
at org.apache.kafka.connect.json.JsonDeserializer.deserialize(JsonDeserializer.java:50)
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:332)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:513)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:513)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:490)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2021-06-17 09:24:59,522] ERROR WorkerSinkTask{id=influxdb-sink-tcore-test-20210615-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)
[2021-06-17 09:24:59,522] INFO Closing InfluxDB client. (io.confluent.influxdb.sink.writer.InfluxDBWriter:406)

Could not initialize class io.debezium.connector.oracle.OracleConnectorConfig

Set-up/Configuration
In case it's needed, it's described at this SO question.
Issue
localhost:8083 is working fine, as I am getting:
{"version":"2.6.0","commit":"62abe01bee039651","kafka_cluster_id":"k6c8D2yvR5OcVFMVZayP9A"}
But when I POST a connector configuration to localhost:8083/connectors, I get a 500 Server Error. I am not posting the JSON body as it's not relevant.
Error
WARN /connectors (org.eclipse.jetty.server.HttpChannel)
javax.servlet.ServletException:
org.glassfish.jersey.server.ContainerException:
java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:408)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:760)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:547)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1607)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1577)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:500)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.glassfish.jersey.server.ContainerException:
java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at org.glassfish.jersey.servlet.internal.ResponseWriter.rethrow(ResponseWriter.java:254)
at org.glassfish.jersey.servlet.internal.ResponseWriter.failure(ResponseWriter.java:236)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:436)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:261)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
... 30 more
Caused by: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at io.debezium.relational.HistorizedRelationalDatabaseConnectorConfig.<init>(HistorizedRelationalDatabaseConnectorConfig.java:52)
at io.debezium.connector.oracle.OracleConnector.config(OracleConnector.java:51)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:366)
at org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$1(AbstractHerder.java:326)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: java.lang.ClassNotFoundException: io.debezium.DebeziumException
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 9 more
[2020-09-23 17:51:57,186] WARN unhandled due to prior sendError (org.eclipse.jetty.server.HttpChannelState)
javax.servlet.ServletException:
org.glassfish.jersey.server.ContainerException:
java.lang.NoClassDefFoundError: io/debezium/DebeziumException
... (remainder of this stack trace is identical to the one above)
The Oracle Instant Client is on the PATH:
echo %PATH% | findstr instantclient
XXX;C:\Users\username\Downloads\instantclient_19_8;XXX
Any pointers would be much appreciated.
Thanks!
Issue solved: I had to copy all the JARs found in the plug-in archive here.
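For anyone hitting the same error, a rough sketch of what the fixed layout looks like, with hypothetical paths and a placeholder version. Kafka Connect loads every JAR inside a plugin directory, and debezium-core is the JAR that provides io.debezium.DebeziumException, so it has to end up next to the connector JAR:
# connect-distributed.properties -- plugin.path is a standard Connect worker setting
plugin.path=/kafka/connect
# Expected contents after copying *all* JARs from the plug-in archive
# (directory name and version are placeholders):
/kafka/connect/debezium-connector-oracle/
    debezium-connector-oracle-<version>.jar
    debezium-core-<version>.jar        # contains io.debezium.DebeziumException
    ... the remaining JARs from the archive
Restarting the Connect worker afterwards lets the PluginClassLoader pick the JARs up.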

Spark code works for 1000 documents, but as soon as it is increased to 1200 or more it fails with None.get?

I am developing an application where I have to read multiple files from HDFS, process them, and save the results in a Cassandra table.
This is my pseudo-code:
val files = sc.wholeTextFiles(s"hdfs://$ipaddress:9000/xhtml/2016/09/*").map(_._1).take(1000) // keep only the file paths; materialise the first 1000 on the driver
val fileNameRDD = sc.parallelize(files) // redistribute those paths as an RDD
Here I extract the paths of 1000 documents and pass each one to a function that takes the path, reads the document, performs the processing, and returns a case class.
This function looks like:
def doSomething(path: String): Foo = { ... }
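For context, a minimal end-to-end sketch of the pipeline described above. The fields of Foo and the body of doSomething are placeholders; the keyspace and table names (elsevier, rnf) are taken from the error message below:
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._   // adds saveToCassandra to RDDs

case class Foo(id: String, content: String)   // hypothetical columns

def doSomething(path: String): Foo = {
  // placeholder: read the document at `path`, process it, build a Foo
  Foo(path, "processed")
}

val files = sc.wholeTextFiles(s"hdfs://$ipaddress:9000/xhtml/2016/09/*")
  .map(_._1)      // wholeTextFiles yields (path, content) pairs; keep only the path
  .take(1000)     // materialise the first 1000 paths on the driver
val fileNameRDD = sc.parallelize(files)
fileNameRDD.map(doSomething).saveToCassandra("elsevier", "rnf")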
My biggest concern is that the code works fine for 1000 documents, but as soon as I increase the count to 1200 or 1500 it fails with the following exception:
[Stage 2:=============================> (6 + 6) / 12]
16/12/06 11:09:48 WARN TaskSetManager: Lost task 10.0 in stage 2.0 (TID 12, 10.178.149.243): java.io.IOException: Failed to write statements to elsevier.rnf.
at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:167)
at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:135)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:111)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:140)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:110)
at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:135)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/12/06 11:09:48 WARN TaskSetManager: Lost task 10.1 in stage 2.0 (TID 14, 10.178.149.243): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/12/06 11:09:48 ERROR TaskSetManager: Task 10 in stage 2.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 10 in stage 2.0 failed 4 times, most recent failure: Lost task 10.3 in stage 2.0 (TID 16, 10.178.149.243): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1904)
at com.datastax.spark.connector.RDDFunctions.saveToCassandra(RDDFunctions.scala:37)
at com.knoldus.xml.RNF2Driver$.main(RNFIngestPipeline.scala:38)
at com.knoldus.xml.RNF2Driver.main(RNFIngestPipeline.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/12/06 11:09:48 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:132)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:179)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:108)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
When I try to do a show, it displays my document paths correctly!
Is there some setting that I am missing?
I am using Spark 1.6! Any help is appreciated!