Async Spring Batch job fails to process file

I'm trying to use Spring Batch to process a file and load it into a database, immediately after the file is uploaded. However, the job completes right after it starts, without processing anything, and I'm not sure of the exact reason. I suspect it isn't doing what it should inside tasklet.execute. Below is the DEBUG output:
22:25:09.823 [http-nio-127.0.0.1-8080-exec-2] DEBUG o.s.b.c.c.a.SimpleBatchConfiguration$ReferenceTargetSource - Initializing lazy target object
22:25:09.912 [SimpleAsyncTaskExecutor-1] INFO o.s.b.c.l.support.SimpleJobLauncher - Job: [FlowJob: [name=moneyTransactionImport]] launched with the following parameters: [{targetFile=C:\Users\test\AppData\Local\Temp\tomcat.1435325122308787143.8080\uploads\test.csv}]
22:25:09.912 [SimpleAsyncTaskExecutor-1] DEBUG o.s.batch.core.job.AbstractJob - Job execution starting: JobExecution: id=95, version=0, startTime=null, endTime=null, lastUpdated=Tue Sep 16 22:25:09 BST 2014, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=52, version=0, Job=[transactionImport]], jobParameters=[{targetFile=C:\Users\test\AppData\Local\Temp\tomcat.1435325122308787143.8080\uploads\test.csv}]
22:25:09.971 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.c.job.flow.support.SimpleFlow - Resuming state=transactionImport.step with status=UNKNOWN
22:25:09.972 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.c.job.flow.support.SimpleFlow - Handling state=transactionImport.step
22:25:10.018 [SimpleAsyncTaskExecutor-1] INFO o.s.batch.core.job.SimpleStepHandler - Executing step: [step]
22:25:10.019 [SimpleAsyncTaskExecutor-1] DEBUG o.s.batch.core.step.AbstractStep - Executing: id=93
22:25:10.072 [SimpleAsyncTaskExecutor-1] DEBUG o.s.batch.core.scope.StepScope - Creating object in scope=step, name=scopedTarget.reader
22:25:10.117 [SimpleAsyncTaskExecutor-1] DEBUG o.s.batch.core.scope.StepScope - Registered destruction callback in scope=step, name=scopedTarget.reader
22:25:10.136 [SimpleAsyncTaskExecutor-1] WARN o.s.b.item.file.FlatFileItemReader - Input resource does not exist class path resource [C:/Users/test/AppData/Local/Temp/tomcat.1435325122308787143.8080/uploads/test.csv]
22:25:10.180 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.repeat.support.RepeatTemplate - Starting repeat context.
22:25:10.181 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat operation about to start at count=1
22:25:10.181 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.c.s.c.StepContextRepeatCallback - Preparing chunk execution for StepContext: org.springframework.batch.core.scope.context.StepContext#5d85b879
22:25:10.181 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.c.s.c.StepContextRepeatCallback - Chunk execution starting: queue size=0
22:25:12.333 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.repeat.support.RepeatTemplate - Starting repeat context.
22:25:12.333 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat operation about to start at count=1
22:25:12.334 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat is complete according to policy and result value.
22:25:12.334 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.c.s.item.ChunkOrientedTasklet - Inputs not busy, ended: true
22:25:12.334 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.core.step.tasklet.TaskletStep - Applying contribution: [StepContribution: read=0, written=0, filtered=0, readSkips=0, writeSkips=0, processSkips=0, exitStatus=EXECUTING]
22:25:12.337 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.core.step.tasklet.TaskletStep - Saving step execution before commit: StepExecution: id=93, version=1, name=step, status=STARTED, exitStatus=EXECUTING, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0, exitDescription=
22:25:12.358 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat is complete according to policy and result value.
22:25:12.358 [SimpleAsyncTaskExecutor-1] DEBUG o.s.batch.core.step.AbstractStep - Step execution success: id=93
22:25:12.419 [SimpleAsyncTaskExecutor-1] DEBUG o.s.batch.core.step.AbstractStep - Step execution complete: StepExecution: id=93, version=3, name=step, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0
22:25:12.442 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.c.job.flow.support.SimpleFlow - Completed state=transactionImport.step with status=COMPLETED
22:25:12.443 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.c.job.flow.support.SimpleFlow - Handling state=transactionImport.COMPLETED
22:25:12.443 [SimpleAsyncTaskExecutor-1] DEBUG o.s.b.c.job.flow.support.SimpleFlow - Completed state=transactionImport.COMPLETED with status=COMPLETED
22:25:12.445 [SimpleAsyncTaskExecutor-1] DEBUG o.s.batch.core.job.AbstractJob - Job execution complete: JobExecution: id=95, version=1, startTime=Tue Sep 16 22:25:09 BST 2014, endTime=null, lastUpdated=Tue Sep 16 22:25:09 BST 2014, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=52, version=0, Job=[transactionImport]], jobParameters=[{targetFile=C:\Users\test\AppData\Local\Temp\tomcat.1435325122308787143.8080\uploads\test.csv}]
22:25:12.466 [SimpleAsyncTaskExecutor-1] INFO o.s.b.c.l.support.SimpleJobLauncher - Job: [FlowJob: [name=transactionImport]] completed with the following parameters: [{targetFile=C:\Users\test\AppData\Local\Temp\tomcat.1435325122308787143.8080\uploads\test.csv}] and the following status: [COMPLETED]
My config is as follows:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Inject
    private TransactionRepository transactionRepository;

    @Inject
    private JobRepository jobRepository;

    @Bean
    @StepScope
    public FlatFileItemReader<MoneyTransaction> reader(@Value("#{jobParameters[targetFile]}") String file) {
        FlatFileItemReader<MoneyTransaction> reader = new FlatFileItemReader<>();
        reader.setResource(new ClassPathResource(file));
        reader.setLineMapper(new DefaultLineMapper<MoneyTransaction>() {
            {
                setLineTokenizer(new DelimitedLineTokenizer() {
                    {
                        setNames(new String[]{"Number", "Date", "Account", "Payee", "Cleared",
                                "Amount", "Category", "Subcategory", "Memo"});
                    }
                });
                setFieldSetMapper(new BeanWrapperFieldSetMapper<MoneyTransaction>() {
                    {
                        setTargetType(MoneyTransaction.class);
                    }
                });
            }
        });
        reader.setStrict(false);
        reader.setLinesToSkip(1);
        return reader;
    }

    @Bean
    public ItemProcessor<MoneyTransaction, Transaction> processor() {
        return new TransactionProcessor();
    }

    @Bean
    public RepositoryItemWriter writer() {
        RepositoryItemWriter writer = new RepositoryItemWriter();
        writer.setRepository(transactionRepository);
        writer.setMethodName("save");
        return writer;
    }

    @Bean
    public Step step(StepBuilderFactory stepBuilderFactory, ItemReader<MoneyTransaction> reader,
                     ItemWriter<Transaction> writer, ItemProcessor<MoneyTransaction, Transaction> processor) {
        return stepBuilderFactory.get("step")
                .<MoneyTransaction, Transaction>chunk(100)
                .reader(reader)
                .processor(processor)
                .writer(writer)
                .build();
    }

    @Bean
    public SimpleAsyncTaskExecutor taskExecutor() {
        SimpleAsyncTaskExecutor executor = new SimpleAsyncTaskExecutor();
        executor.setConcurrencyLimit(1);
        return executor;
    }

    @Bean
    public SimpleJobLauncher jobLauncher() {
        SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
        jobLauncher.setJobRepository(jobRepository);
        jobLauncher.setTaskExecutor(taskExecutor());
        return jobLauncher;
    }
}
And I save the file and start processing in the following way:
public JobExecution processFile(String name, MultipartFile file) {
    if (!file.isEmpty()) {
        try {
            byte[] bytes = file.getBytes();
            String rootPath = System.getProperty("catalina.home");
            File uploadDirectory = new File(rootPath.concat(File.separator).concat("uploads"));
            if (!uploadDirectory.exists()) {
                uploadDirectory.mkdirs();
            }
            File uploadFile = new File(uploadDirectory.getAbsolutePath() + File.separator + file.getOriginalFilename());
            // try-with-resources so the stream is closed even if the write fails
            try (BufferedOutputStream stream = new BufferedOutputStream(new FileOutputStream(uploadFile))) {
                stream.write(bytes);
            }
            return startImportJob(uploadFile, "transactionImport");
        } catch (Exception e) {
            logger.error(String.format("Error processing file '%s'.", name), e);
            throw new MoneyException(e);
        }
    } else {
        throw new MoneyException("There was no file to process.");
    }
}
/**
 * @param file
 */
private JobExecution startImportJob(File file, String jobName) {
    logger.debug(String.format("Starting job to import file '%s'.", file));
    try {
        Job job = jobs.get(jobName).incrementer(new MoneyRunIdIncrementer()).flow(step).end().build();
        return jobLauncher.run(job, new JobParametersBuilder().addString("targetFile", file.getAbsolutePath()).toJobParameters());
    } catch (JobExecutionAlreadyRunningException e) {
        logger.error(String.format("Job for processing file '%s' is already running.", file), e);
        throw new MoneyException(e);
    } catch (JobParametersInvalidException e) {
        logger.error(String.format("Invalid parameters for processing of file '%s'.", file), e);
        throw new MoneyException(e);
    } catch (JobRestartException e) {
        logger.error(String.format("Error restarting job, for processing file '%s'.", file), e);
        throw new MoneyException(e);
    } catch (JobInstanceAlreadyCompleteException e) {
        logger.error(String.format("Job to process file '%s' has already completed.", file), e);
        throw new MoneyException(e);
    }
}
I'm kind of stumped at the minute, and any help would be gratefully received.
Thanks.

Found the issue. The problem was the type of resource, ClassPathResource(file), combined with the fact that I was setting the strict property to false, which is why the missing resource only produced the WARN above (at 22:25:10.136) instead of failing the step.
reader.setResource(new ClassPathResource(file));
I should have used
reader.setResource(new FileSystemResource(file));
Which makes complete sense, as I wasn't uploading the file to the classpath.
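For reference, the relevant part of the corrected reader bean (a sketch of the fix; the line mapper from the original configuration is unchanged and omitted here):

@Bean
@StepScope
public FlatFileItemReader<MoneyTransaction> reader(@Value("#{jobParameters[targetFile]}") String file) {
    FlatFileItemReader<MoneyTransaction> reader = new FlatFileItemReader<>();
    // The uploaded file lives on the file system, not on the classpath.
    reader.setResource(new FileSystemResource(file));
    // With the default strict=true, a missing file now fails the step
    // instead of completing silently with read=0.
    reader.setLinesToSkip(1);
    // ... line mapper as in the original configuration ...
    return reader;
}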

Related

While using Thread.sleep on a ParallelFlux, it doesn't wait for the sleeping thread to complete and executes the onComplete() function

While using parallel() on a Flux, I stop a thread for some time using Thread.sleep, but the problem is that the Flux does not wait until the sleep time has elapsed: onComplete() executes on the subscription anyway.
List<String> str = new ArrayList<>();
str.add("spring");
str.add("webflux");
str.add("example");
AtomicInteger num = new AtomicInteger();
ParallelFlux<Object> names = Flux.fromIterable(str)
        .log()
        .parallel(2)
        .runOn(Schedulers.boundedElastic())
        .map(s -> {
            if (s.equalsIgnoreCase("webflux")) {
                try {
                    System.out.println("waiting...");
                    Thread.sleep(1000);
                    System.out.println("done...");
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            return s + " " + num.incrementAndGet();
        });
names.subscribe(s -> {
    System.out.println("value " + s + " thread : " + Thread.currentThread().getName());
});
Output:
19:35:24.870 [main] INFO reactor.Flux.Iterable.1 - | onSubscribe([Synchronous Fuseable] FluxIterable.IterableSubscription)
19:35:24.896 [main] INFO reactor.Flux.Iterable.1 - | request(256)
19:35:24.897 [main] INFO reactor.Flux.Iterable.1 - | onNext(spring)
19:35:24.898 [main] INFO reactor.Flux.Iterable.1 - | onNext(webflux)
19:35:24.898 [main] INFO reactor.Flux.Iterable.1 - | onNext(example)
waiting...
value spring 1 thread : boundedElastic-1
value example 2 thread : boundedElastic-1
19:35:24.899 [main] INFO reactor.Flux.Iterable.1 - | onComplete()
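The output is consistent with subscribe() being non-blocking: the main thread logs the upstream onComplete() and can exit before the boundedElastic worker finishes its one-second sleep, so the delayed "webflux" value may never be printed. If the goal in a demo is simply to wait for every value, one option (a sketch, not from the original post) is to block the calling thread until the flux finishes:

// Convert the ParallelFlux back to a sequential Flux and block the caller
// until the last element (including the delayed "webflux" one) arrives.
names.sequential()
        .doOnNext(s -> System.out.println(
                "value " + s + " thread : " + Thread.currentThread().getName()))
        .blockLast();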

How to handle UnknownProducerIdException

We are having some trouble with Spring Cloud and Kafka: sometimes our microservice throws an UnknownProducerIdException. This is caused when the broker-side parameter transactional.id.expiration.ms has expired.
My question: would it be possible to catch that exception and retry the failed message? If so, what would be the best option for handling it?
I have taken a look at:
- https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89068820
- Kafka UNKNOWN_PRODUCER_ID exception
We are using Spring Cloud Hoxton.RELEASE and Spring Kafka 2.2.4.RELEASE.
We are using the AWS Kafka solution, so we can't set a new value for the property I mentioned before.
Here is some trace of the exception:
2020-04-07 20:54:00.563 ERROR 5188 --- [ad | producer-2] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-2] The broker returned org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception. for topic-partition test.produce.another-2 with producerId 35000, epoch 0, and sequence number 8
2020-04-07 20:54:00.563 INFO 5188 --- [ad | producer-2] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-2] ProducerId set to -1 with epoch -1
2020-04-07 20:54:00.565 ERROR 5188 --- [ad | producer-2] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{...}' to topic <some-topic>:
To reproduce this exception:
- I used the Confluent docker images and set the environment variable KAFKA_TRANSACTIONAL_ID_EXPIRATION_MS to 10 seconds so I wouldn't have to wait too long for the exception to be thrown.
- In another process, I sent one message every 10 seconds to the topic the Java application listens on.
Here is a code example:
File Bindings.java
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;

public interface Bindings {

    @Input("test-input")
    SubscribableChannel testListener();

    @Output("test-output")
    MessageChannel testProducer();
}
File application.yml (don't forget to set the environment variable KAFKA_HOST):
spring:
  cloud:
    stream:
      kafka:
        binder:
          auto-create-topics: true
          brokers: ${KAFKA_HOST}
          transaction:
            producer:
              error-channel-enabled: true
          producer-properties:
            acks: all
            retry.backoff.ms: 200
            linger.ms: 100
            max.in.flight.requests.per.connection: 1
            enable.idempotence: true
            retries: 3
            compression.type: snappy
            request.timeout.ms: 5000
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
          consumer-properties:
            session.timeout.ms: 20000
            max.poll.interval.ms: 350000
            enable.auto.commit: true
            allow.auto.create.topics: true
            auto.commit.interval.ms: 12000
            max.poll.records: 5
            isolation.level: read_committed
          configuration:
            auto.offset.reset: latest
      bindings:
        test-input:
          # contentType: text/plain
          destination: test.produce
          group: group-input
          consumer:
            maxAttempts: 3
            startOffset: latest
            autoCommitOnError: true
            queueBufferingMaxMessages: 100000
            autoCommitOffset: true
        test-output:
          # contentType: text/plain
          destination: test.produce.another
          group: group-output
          producer:
            acks: all
debug: true
The listener handler:
@SpringBootApplication
@EnableBinding(Bindings.class)
public class PocApplication {

    private static final Logger log = LoggerFactory.getLogger(PocApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(PocApplication.class, args);
    }

    @Autowired
    private BinderAwareChannelResolver binderAwareChannelResolver;

    @StreamListener(Topics.TESTLISTENINPUT)
    public void listen(Message<?> in, String headerKey) {
        final MessageBuilder builder;
        MessageChannel messageChannel;
        messageChannel = this.binderAwareChannelResolver.resolveDestination("test-output");
        Object payload = in.getPayload();
        builder = MessageBuilder.withPayload(payload);
        try {
            log.info("Event received: {}", in);
            if (!messageChannel.send(builder.build())) {
                log.error("Something happened trying to send the message! {}", in.getPayload());
            }
            log.info("Commit success");
        } catch (UnknownProducerIdException e) {
            log.error("UnknownProducerIdException caught ", e);
        } catch (KafkaException e) {
            log.error("KafkaException caught ", e);
        } catch (Exception e) {
            System.out.println("Commit failed " + e.getMessage());
        }
    }
}
Regards
} catch (UnknownProducerIdException e) {
    log.error("UnknownProducerIdException caught ", e);
To catch exceptions there, you need to set the sync kafka producer property (https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/3.0.3.RELEASE/reference/html/spring-cloud-stream-binder-kafka.html#kafka-producer-properties). Otherwise, the error comes back asynchronously
You should not "eat" the exception there; it must be thrown back to the container so the container will roll back the transaction.
Also,
} catch (Exception e) {
    System.out.println("Commit failed " + e.getMessage());
}
The commit is performed by the container after the stream listener returns to the container so you will never see a commit error here; again, you must let the exception propagate back to the container.
The container will retry the delivery according to the consumer binding's retry configuration.
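Putting those two points together, a minimal sketch of the listener with the swallowing catch blocks removed (my reconstruction, not the original author's code; it assumes the same bindings, logger, and autowired resolver as above):

@StreamListener(Topics.TESTLISTENINPUT)
public void listen(Message<?> in, String headerKey) {
    log.info("Event received: {}", in);
    MessageChannel out = this.binderAwareChannelResolver.resolveDestination("test-output");
    // No try/catch: any send failure (including an UnknownProducerIdException
    // surfaced synchronously once the sync producer property is set) propagates
    // to the container, which rolls back the transaction and redelivers the
    // message according to the consumer binding's retry settings (maxAttempts: 3 above).
    if (!out.send(MessageBuilder.withPayload(in.getPayload()).build())) {
        throw new IllegalStateException("Could not send message: " + in.getPayload());
    }
}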
You can probably also use a callback to handle the exception. I'm not sure about the Spring library for Kafka, but if you're using the plain Kafka client you can do something like this:
producer.send(record, new Callback() {
    public void onCompletion(RecordMetadata metadata, Exception e) {
        if (e != null) {
            e.printStackTrace();
            if (e.getClass().equals(UnknownProducerIdException.class)) {
                logger.info("UnknownProducerIdException caught");
                while (--retry >= 0) {
                    send(topic, partition, msg);
                }
            }
        } else {
            logger.info("The offset of the record we just sent is: " + metadata.offset());
        }
    }
});

When TaskExecutor concurrencyLimit is less than the number of flow steps, job will be blocked

I'm using Spring Batch 4.1.2.RELEASE and have a problem with parallel steps (using a split flow). When the split flow's concurrencyLimit (or ThreadPoolTaskExecutor.corePoolSize) is less than the number of steps in the split flow, the job never stops and no exception is thrown.
I know the workaround is to increase the concurrencyLimit or decrease the number of steps in each flow, but I want to understand whether the problem lies in the interaction between the job's TaskExecutor and the tasks' TaskExecutor, or in my code.
Leaving the split flow aside, I found that if the number of jobs (kept as simple as possible) submitted to the jobLauncher exceeds its TaskExecutor.corePoolSize (assume 1), the jobs are executed one by one. That is the expected result.
@Bean
public TaskExecutor taskExecutor() {
    SimpleAsyncTaskExecutor executor = new SimpleAsyncTaskExecutor("tsk-Exec-");
    executor.setConcurrencyLimit(2);
    return executor;
}
@SpringBootApplication
@EnableBatchProcessing
@EnableWebMvc
public class BatchJobApplication {
    public static void main(String[] args) {
        SpringApplication.run(BatchJobApplication.class, args);
    }
}
The code below creates a single job containing a split flow with 4 tasklet steps.
@Autowired
private TaskExecutor taskExecutor;

public JobExecution experiment(Integer flowId) {
    String dateFormat = LocalDate.now(ZoneId.of("+8")).format(DateTimeFormatter.BASIC_ISO_DATE);
    JobBuilder job1 = this.jobBuilderFactory.get("Job_" + flowId + "_" + dateFormat);
    List<TaskletStep> taskletSteps = Lists.newArrayList();
    for (int i = 0; i < 4; i++) {
        taskletSteps.add(this.stepBuilderFactory.get("step:" + i).tasklet(
                (contribution, chunkContext) -> {
                    Thread.sleep(3000);
                    return RepeatStatus.FINISHED;
                }).build());
    }
    JobExecution run = null;
    FlowBuilder.SplitBuilder<SimpleFlow> splitFlow = new FlowBuilder<SimpleFlow>("splitFlow").split(taskExecutor);
    FlowBuilder<SimpleFlow> lastFlowNode = null;
    for (TaskletStep taskletStep : taskletSteps) {
        SimpleFlow singleNode = new FlowBuilder<SimpleFlow>("async-fw-" + taskletStep.getName()).start(taskletStep).build();
        lastFlowNode = splitFlow.add(singleNode);
    }
    Job build = job1.start(lastFlowNode.end()).build().build();
    JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
    jobParametersBuilder.addDate("parameterGenerated", new Date());
    try {
        run = jobLauncher.run(build, jobParametersBuilder.toJobParameters());
    } catch (JobExecutionAlreadyRunningException e) {
        e.printStackTrace();
    } catch (JobRestartException e) {
        e.printStackTrace();
    } catch (JobInstanceAlreadyCompleteException e) {
        e.printStackTrace();
    } catch (JobParametersInvalidException e) {
        e.printStackTrace();
    }
    return run;
}
Now the job blocks:
2019-07-29 18:08:10.321 INFO 24416 --- [ job-Exec-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=Job_2124_20190729]] launched with the following parameters: [{parameterGenerated=1564394890193}]
2019-07-29 18:08:13.392 DEBUG 24416 --- [ job-Exec-1] cTaskExecutor$ConcurrencyThrottleAdapter : Entering throttle at concurrency count 0
2019-07-29 18:08:13.393 DEBUG 24416 --- [ job-Exec-1] cTaskExecutor$ConcurrencyThrottleAdapter : Entering throttle at concurrency count 1
2019-07-29 18:08:13.393 DEBUG 24416 --- [ tsk-Exec-2] cTaskExecutor$ConcurrencyThrottleAdapter : Concurrency count 2 has reached limit 2 - blocking
2019-07-29 18:08:13.425 INFO 24416 --- [ tsk-Exec-1] o.s.batch.core.job.SimpleStepHandler : Executing step: [step:3]
2019-07-29 18:08:16.466 DEBUG 24416 --- [ tsk-Exec-1] cTaskExecutor$ConcurrencyThrottleAdapter : Returning from throttle at concurrency count 1
2019-07-29 18:08:16.466 DEBUG 24416 --- [ tsk-Exec-2] cTaskExecutor$ConcurrencyThrottleAdapter : Entering throttle at concurrency count 1
2019-07-29 18:08:16.466 DEBUG 24416 --- [ tsk-Exec-2] cTaskExecutor$ConcurrencyThrottleAdapter : Concurrency count 2 has reached limit 2 - blocking
2019-07-29 18:08:16.484 INFO 24416 --- [ tsk-Exec-3] o.s.batch.core.job.SimpleStepHandler : Executing step: [step:2]
2019-07-29 18:08:19.505 DEBUG 24416 --- [ tsk-Exec-3] cTaskExecutor$ConcurrencyThrottleAdapter : Returning from throttle at concurrency count 1
2019-07-29 18:08:19.505 DEBUG 24416 --- [ tsk-Exec-2] cTaskExecutor$ConcurrencyThrottleAdapter : Entering throttle at concurrency count 1
2019-07-29 18:08:19.506 DEBUG 24416 --- [ tsk-Exec-4] cTaskExecutor$ConcurrencyThrottleAdapter : Concurrency count 2 has reached limit 2 - blocking
I think I found the answer yesterday evening.
When the TaskExecutor used by the split flow has a small concurrencyLimit (or ThreadPoolTaskExecutor.corePoolSize), then, judging from the code, it is very likely that all of its threads end up blocked in future.get(), leaving no free thread to actually run a taskletStep:
//SplitState.java:114
results.add(task.get());
In addition, the threads created by the JobLauncher's TaskExecutor never have to wait on futures, so that executor always has a free thread available to accept jobs; none of them ever has to wait on a condition.
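To make the workaround concrete, here is a sketch of a dedicated split executor (my illustration of the "increase the concurrencyLimit" fix; the limit of 4 matches the number of parallel flows built in experiment()):

@Bean
public TaskExecutor splitTaskExecutor() {
    // One thread per parallel flow, so the threads parked in future.get()
    // inside SplitState can never starve the tasklet steps of workers.
    SimpleAsyncTaskExecutor executor = new SimpleAsyncTaskExecutor("split-Exec-");
    executor.setConcurrencyLimit(4);
    return executor;
}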

How to load a drools rule externally

I have a drl file and a .class file in a folder. The drl contains a rule built on the class's attributes. Now, from a Java program, I want to invoke this rule on some input. I'm clueless here; please look at the code below.
class file
import java.io.Serializable;

public class Txn754909164 implements Serializable {
    String sequenceNo;
    String accountNumber;
    String customerNumber;
    // setters and getters
}
drl file
import Txn754909164;
import java.util.*;

dialect "mvel"

rule "rule6"
when
    txn : Txn754909164(sequence == 10)
then
    System.out.println("invoking rule ***********************");
end
client code
public KieContainer kieContainer(String packageName) {
    KieServices kieServices = KieServices.Factory.get();
    KieFileSystem kieFileSystem = kieServices.newKieFileSystem();
    kieFileSystem.write(ResourceFactory.newUrlResource("drl resource url..."));
    KieBuilder kieBuilder = kieServices.newKieBuilder(kieFileSystem, MyInputClass.getClassLoader());
    KieModule kieModule = null;
    try {
        kieBuilder.buildAll();
        kieModule = kieBuilder.getKieModule();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return kieServices.newKieContainer(kieServices.getRepository().getDefaultReleaseId(), MyInputClass.getClassLoader());
}
and finally
StatelessKieSession kieSession = container.getKieBase().newStatelessKieSession();
kieSession.execute(obj);
logs
11:47:34.795 [http-nio-8282-exec-4] INFO o.d.c.k.b.impl.KieRepositoryImpl - KieModule was added: MemoryKieModule[releaseId=org.default:artifact:1.0.0-SNAPSHOT]
11:47:34.803 [http-nio-8282-exec-4] TRACE org.drools.core.phreak.AddRemoveRule - Adding Rule rule6
11:47:45.994 [AsyncResolver-bootstrap-executor-0] INFO c.n.d.s.r.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
11:47:49.899 [http-nio-8282-exec-4] TRACE o.drools.core.reteoo.EntryPointNode - Insert [fact 0:1:1764329060:1764329060:1:DEFAULT:NON_TRAIT:Txn754909164:Txn754909164#69298664]
11:47:52.953 [http-nio-8282-exec-4] INFO o.k.a.e.r.DebugRuleRuntimeEventListener - ==>[ObjectInsertedEventImpl: getFactHandle()=[fact 0:1:1764329060:1764329060:1:DEFAULT:NON_TRAIT:Txn754909164:Txn754909164#69298664], getObject()=Txn754909164#69298664, getKnowledgeRuntime()=KieSession[0], getPropagationContext()=PhreakPropagationContext [entryPoint=EntryPoint::DEFAULT, factHandle=[fact 0:1:1764329060:1764329060:1:DEFAULT:NON_TRAIT:Txn754909164:Txn754909164#69298664], leftTuple=null, originOffset=-1, propagationNumber=2, rule=null, type=INSERTION]]
11:48:41.571 [http-nio-8282-exec-4] DEBUG org.drools.core.common.DefaultAgenda - State was INACTIVE is now FIRING_ALL_RULES
11:48:41.572 [http-nio-8282-exec-4] TRACE org.drools.core.common.DefaultAgenda - Starting Fire All Rules
11:48:41.573 [http-nio-8282-exec-4] DEBUG org.drools.core.common.DefaultAgenda - State was FIRING_ALL_RULES is now HALTING
11:48:41.573 [http-nio-8282-exec-4] DEBUG org.drools.core.common.DefaultAgenda - State was HALTING is now INACTIVE
11:48:41.574 [http-nio-8282-exec-4] TRACE org.drools.core.common.DefaultAgenda - Ending Fire All Rules
11:48:41.575 [http-nio-8282-exec-4] DEBUG org.drools.core.common.DefaultAgenda - State was INACTIVE is now DISPOSED
It should print the statement in the then clause of the drl rule, but nothing is printed.
The question and explanation above are an answer in themselves. I found it working perfectly fine the next day; I guess it was just a workspace issue.
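As an aside, a more compact way to build a session from a single external .drl file is the KieHelper utility (a sketch, assuming a Drools 6+ distribution is on the classpath; the file path is illustrative):

import org.kie.api.KieBase;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.StatelessKieSession;
import org.kie.internal.io.ResourceFactory;
import org.kie.internal.utils.KieHelper;

// Compile the external rule file into an in-memory KieBase and fire it
// against a fact object.
KieBase kieBase = new KieHelper()
        .addResource(ResourceFactory.newFileResource("/path/to/rule6.drl"), ResourceType.DRL)
        .build();
StatelessKieSession session = kieBase.newStatelessKieSession();
session.execute(obj); // obj is a populated Txn754909164 instance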

JMeter - XMPP Authentication

I am building a test plan to test XMPP with JMeter, but I always get an error when I send the authentication string to the server, even though the authentication string is correct. Has anybody seen the same problem, or does anyone know how to fix this issue? Thanks.
2014/07/04 10:23:22 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2014/07/04 10:23:22 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2014/07/04 10:23:22 INFO - jmeter.engine.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group
2014/07/04 10:23:22 INFO - jmeter.engine.StandardJMeterEngine: Starting 1 threads for group Thread Group.
2014/07/04 10:23:22 INFO - jmeter.engine.StandardJMeterEngine: Thread will continue on error
2014/07/04 10:23:22 INFO - jmeter.threads.ThreadGroup: Starting thread group number 1 threads 1 ramp-up 1 perThread 1000.0 delayedStart=false
2014/07/04 10:23:22 INFO - jmeter.threads.ThreadGroup: Started thread group number 1
2014/07/04 10:23:22 INFO - jmeter.engine.StandardJMeterEngine: All thread groups have been started
2014/07/04 10:23:22 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-1
2014/07/04 10:23:22 ERROR - ru.yandex.jmeter.XMPPClientImpl: Error reading data java.lang.RuntimeException: Retries more than 1000, aborting read
at ru.yandex.jmeter.XMPPClientImpl.read(XMPPClientImpl.java:116)
at org.apache.jmeter.protocol.tcp.sampler.TCPSampler.sample(TCPSampler.java:414)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:428)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
at java.lang.Thread.run(Unknown Source)
2014/07/04 10:23:22 ERROR - jmeter.protocol.tcp.sampler.TCPSampler: java.lang.RuntimeException: Error reading data
at ru.yandex.jmeter.XMPPClientImpl.read(XMPPClientImpl.java:152)
at org.apache.jmeter.protocol.tcp.sampler.TCPSampler.sample(TCPSampler.java:414)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:428)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: Retries more than 1000, aborting read
at ru.yandex.jmeter.XMPPClientImpl.read(XMPPClientImpl.java:116)
... 4 more
2014/07/04 10:23:22 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-1
2014/07/04 10:23:22 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2014/07/04 10:23:22 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
I had tried the XMPPClientImpl plugin but always got the same error ("Retries more than 1000, aborting read"), so I decided to leave it and write my own code.
I use a BeanShell Sampler in which I run the following code (using the Smack library) to connect to the XMPP server.
String CLASS_PATH = "C:/JMeter/apache-jmeter-2.13/lib/ext/smack/";
addClassPath(CLASS_PATH + "smack-android-extensions-4.1.3.jar");
addClassPath(CLASS_PATH + "smack-tcp-4.1.3.jar");
addClassPath(CLASS_PATH + "smack-android-4.1.3.jar");

// explicitly import every class you need
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.ConnectionListener;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
import org.jivesoftware.smack.SmackException;
import org.jivesoftware.smack.XMPPException;
import org.jivesoftware.smack.XMPPException.XMPPErrorException;

String jabberId = "...";
String jabberPass = "...";
String SERVER_ADDRESS = "...";
int PORT = 5222; // or any other port

XMPPTCPConnection getConnection() {
    XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
            .setUsernameAndPassword(jabberId, jabberPass)
            .setHost(SERVER_ADDRESS)
            .setServiceName(SERVER_ADDRESS)
            .setPort(PORT)
            // .setSecurityMode(ConnectionConfiguration.SecurityMode.disabled)
            .setSendPresence(true)
            // .setDebuggerEnabled(YouMe.DEBUG)
            .build();
    XMPPTCPConnection con = new XMPPTCPConnection(config);
    int REPLY_TIMEOUT = 50000; // 50 seconds, but can be shorter
    con.setPacketReplyTimeout(REPLY_TIMEOUT);
    return con;
}
Don't forget to add the smack directory (e.g. C:\JMeter\apache-jmeter-2.13\lib\ext\smack) to the classpath field ("Add directory or jar to ClassPath") in the Test Plan node of your test plan.
To connect -
con = getConnection();
con.connect();
To login -
con.login(jabberId, jabberPass);
You can also add a connection listener -
ConnectionListener listener = new ConnectionListener() {
    public void connected(XMPPConnection xmppConnection) {
        // run the main code, incl. the login code
        runMain();
    }
    public void authenticated(XMPPConnection xmppConnection, boolean resumed) {
    }
    public void connectionClosed() {
    }
    public void connectionClosedOnError(Exception e) {
    }
    public void reconnectingIn(int i) {
    }
    public void reconnectionSuccessful() {
    }
    public void reconnectionFailed(Exception e) {
    }
};
con.addConnectionListener(listener);
// connect
con.connect();

runMain() {
    con.login(jabberId, jabberPass);
    // ...
}
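Inside a BeanShell Sampler it also helps to report the outcome back to JMeter explicitly; here is a sketch using the SampleResult object that JMeter exposes to BeanShell samplers:

try {
    con = getConnection();
    con.connect();
    con.login(jabberId, jabberPass);
    SampleResult.setSuccessful(true);
    SampleResult.setResponseMessage("Authenticated as " + jabberId);
} catch (Exception e) {
    SampleResult.setSuccessful(false);
    SampleResult.setResponseMessage("XMPP authentication failed: " + e.getMessage());
}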