I have two TestNG test cases annotated with @Test. One method has a return type of String and is itself a test case. The other one uses the output of the first. When I run both tests, TestNG reports that only one ran instead of two.
public class Login {
private static String INITIATE = "https://login.endpoint.com/initiate";
private static String COMPLETE = "https://login.endpoint.com/complete";
@SuppressWarnings("unchecked")
@Test(groups = "middleware", priority = 1)
public String InitiateLogin() throws FileNotFoundException, UnsupportedEncodingException {
RequestSpecification request = RestAssured.given();
request.header("Content-Type", "application/json");
JSONObject json = new JSONObject();
json.put("email", "test#test.com");
json.put("password", "111111");
request.body(json.toJSONString());
Response response = request.post(INITIATE);
String OTP = response.path("OTP");
if(OTP.matches("[0-9]{4}")) {
response.then().body(
"OTP", equalTo(OTP));
}
return OTP;
}
@SuppressWarnings("unchecked")
@Test(groups = "middleware", priority = 2)
public void CompleteLogin() throws FileNotFoundException, UnsupportedEncodingException {
RequestSpecification completeRequest = RestAssured.given();
completeRequest.header("Content-Type", "application/json");
JSONObject completeJson = new JSONObject();
completeJson.put("Otp", InitiateDeviceRelease());
completeRequest.body(completeJson.toJSONString());
Response completeResponse = completeRequest.post(COMPLETE);
completeResponse.then().body(
"SessionToken", equalTo("ewrtw4456765v543fw3v"));
}
}
This is the output of the test run. It is supposed to show that two test cases ran, but it only shows one. Is it because the first test has a return type instead of void? How can I make TestNG see that there are two test cases?
{
"OTP": "6645"
}
PASSED: CompleteLogin
===============================================
Default test
Tests run: 1, Failures: 0, Skips: 0
===============================================
===============================================
Default suite
Total tests run: 1, Failures: 0, Skips: 0
===============================================
A @Test method cannot have a return type; it should always be void. By default TestNG ignores test methods that return a value unless the suite is configured with allow-return-values="true".
Try changing the return type of the InitiateLogin() method to void; it should then work.
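If the OTP needs to flow from the first test into the second, one option is to keep both methods void and pass the value through a field, using dependsOnMethods so the second test runs only after the first. The following is a minimal sketch based on the question's code, not the only possible structure:

import static org.hamcrest.Matchers.equalTo;

import io.restassured.RestAssured;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;
import org.json.simple.JSONObject;
import org.testng.annotations.Test;

public class Login {
    private static final String INITIATE = "https://login.endpoint.com/initiate";
    private static final String COMPLETE = "https://login.endpoint.com/complete";

    // shared state instead of a return value; TestNG now counts both methods
    // because both are void
    private String otp;

    @SuppressWarnings("unchecked")
    @Test(groups = "middleware")
    public void initiateLogin() {
        RequestSpecification request = RestAssured.given();
        request.header("Content-Type", "application/json");
        JSONObject json = new JSONObject();
        json.put("email", "test@test.com");
        json.put("password", "111111");
        request.body(json.toJSONString());
        Response response = request.post(INITIATE);
        otp = response.path("OTP");
    }

    @SuppressWarnings("unchecked")
    @Test(groups = "middleware", dependsOnMethods = "initiateLogin")
    public void completeLogin() {
        RequestSpecification request = RestAssured.given();
        request.header("Content-Type", "application/json");
        JSONObject json = new JSONObject();
        json.put("Otp", otp); // value captured by the first test
        request.body(json.toJSONString());
        Response response = request.post(COMPLETE);
        response.then().body("SessionToken", equalTo("ewrtw4456765v543fw3v"));
    }
}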
I have implemented a Kafka application using the consumer API, and I have two regression tests implemented with the streams API:
To test the happy path: the test produces data into the input topic that the application listens to; the application consumes it and produces data into the output topic, which the test then consumes and validates against the expected output.
To test the error path: the behavior is the same as above, except this time the application produces data into the error topic, from which the test consumes and validates against the expected error output.
My code and the regression-test code reside in the same project under the expected directory structure. In both tests the data should be picked up by the same listener on the application side.
The problem is:
When I execute the tests individually (manually), each test passes. However, if I execute them together but sequentially (for example via gradle clean build), only the first test passes. The second test fails after the test-side consumer polls for data and, after some time, gives up without finding any.
Observation:
From debugging, it looks like everything works perfectly the first time (test-side and application-side producers and consumers). During the second test, however, the application-side consumer does not seem to receive any data (the test-side producer appears to produce data, but I cannot say that for sure), and hence nothing is produced into the error topic.
What I have tried so far:
After investigating, my understanding is that we are running into race conditions, and I found suggestions to avoid that, such as:
use @DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
tear down the broker after each test (please see the .destroy() calls on the brokers)
use different topic names for each test
I applied all of them and still could not recover from the issue.
I am providing the code here for perusal. Any insight is appreciated.
Code for 1st test (Testing error path):
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@EmbeddedKafka(
partitions = 1,
controlledShutdown = false,
topics = {
AdapterStreamProperties.Constants.INPUT_TOPIC,
AdapterStreamProperties.Constants.ERROR_TOPIC
},
brokerProperties = {
"listeners=PLAINTEXT://localhost:9092",
"port=9092",
"log.dir=/tmp/data/logs",
"auto.create.topics.enable=true",
"delete.topic.enable=true"
}
)
public class AbstractIntegrationFailurePathTest {
private final int retryLimit = 0;
@Autowired
protected EmbeddedKafkaBroker embeddedFailurePathKafkaBroker;
//To produce data
@Autowired
protected KafkaTemplate<PreferredMediaMsgKey, SendEmailCmd> inputProducerTemplate;
//To read from the error output
@Autowired
protected Consumer<PreferredMediaMsgKey, ErrorCmd> outputErrorConsumer;
//Service to execute notification-preference
@Autowired
protected AdapterStreamProperties projectProperties;
protected void subscribe(Consumer consumer, String topic, int attempt) {
try {
embeddedFailurePathKafkaBroker.consumeFromAnEmbeddedTopic(consumer, topic);
} catch (ComparisonFailure ex) {
if (attempt < retryLimit) {
subscribe(consumer, topic, attempt + 1);
}
}
}
}
@TestConfiguration
public class AdapterStreamFailurePathTestConfig {
@Autowired
private EmbeddedKafkaBroker embeddedKafkaBroker;
@Value("${spring.kafka.adapter.application-id}")
private String applicationId;
@Value("${spring.kafka.adapter.group-id}")
private String groupId;
//Producer of records that the program consumes
@Bean
public Map<String, Object> sendEmailCmdProducerConfigs() {
Map<String, Object> results = KafkaTestUtils.producerProps(embeddedKafkaBroker);
results.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
AdapterStreamProperties.Constants.KEY_SERDE.serializer().getClass());
results.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
AdapterStreamProperties.Constants.INPUT_VALUE_SERDE.serializer().getClass());
return results;
}
@Bean
public ProducerFactory<PreferredMediaMsgKey, SendEmailCmd> inputProducerFactory() {
return new DefaultKafkaProducerFactory<>(sendEmailCmdProducerConfigs());
}
@Bean
public KafkaTemplate<PreferredMediaMsgKey, SendEmailCmd> inputProducerTemplate() {
return new KafkaTemplate<>(inputProducerFactory());
}
//Consumer of the error output, generated by the program
@Bean
public Map<String, Object> outputErrorConsumerConfig() {
Map<String, Object> props = KafkaTestUtils.consumerProps(
applicationId, Boolean.TRUE.toString(), embeddedKafkaBroker);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
AdapterStreamProperties.Constants.KEY_SERDE.deserializer().getClass()
.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
AdapterStreamProperties.Constants.ERROR_VALUE_SERDE.deserializer().getClass()
.getName());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
return props;
}
@Bean
public Consumer<PreferredMediaMsgKey, ErrorCmd> outputErrorConsumer() {
DefaultKafkaConsumerFactory<PreferredMediaMsgKey, ErrorCmd> rpf =
new DefaultKafkaConsumerFactory<>(outputErrorConsumerConfig());
return rpf.createConsumer(groupId, "notification-failure");
}
}
@RunWith(SpringRunner.class)
@SpringBootTest(classes = AdapterStreamFailurePathTestConfig.class)
@ActiveProfiles(profiles = "errtest")
public class ErrorPath400Test extends AbstractIntegrationFailurePathTest {
@Autowired
private DataGenaratorForErrorPath400Test datagen;
@Mock
private AdapterHttpClient httpClient;
@Autowired
private ErroredEmailCmdDeserializer erroredEmailCmdDeserializer;
@Before
public void setup() throws InterruptedException {
Mockito.when(httpClient.callApi(Mockito.any()))
.thenReturn(
new GenericResponse(
400,
TestConstants.ERROR_MSG_TO_CHK));
Mockito.when(httpClient.createURI(Mockito.any(),Mockito.any(),Mockito.any())).thenCallRealMethod();
inputProducerTemplate.send(
projectProperties.getInputTopic(),
datagen.getKey(),
datagen.getEmailCmdToProduce());
System.out.println("producer: " + projectProperties.getInputTopic());
subscribe(outputErrorConsumer, projectProperties.getErrorTopic(), 0);
}
@Test
public void testWithError() throws InterruptedException, InvalidProtocolBufferException, TextFormat.ParseException {
ConsumerRecords<PreferredMediaMsgKeyBuf.PreferredMediaMsgKey, ErrorCommandBuf.ErrorCmd> records;
List<ConsumerRecord<PreferredMediaMsgKeyBuf.PreferredMediaMsgKey, ErrorCommandBuf.ErrorCmd>> outputListOfErrors = new ArrayList<>();
int attempt = 0;
int expectedRecords = 1;
do {
records = KafkaTestUtils.getRecords(outputErrorConsumer);
records.forEach(outputListOfErrors::add);
attempt++;
} while (attempt < expectedRecords && outputListOfErrors.size() < expectedRecords);
//Verify the recipient event stream size
Assert.assertEquals(expectedRecords, outputListOfErrors.size());
//Validate output
}
@After
public void tearDown() {
outputErrorConsumer.close();
embeddedFailurePathKafkaBroker.destroy();
}
}
The second test is almost identical in structure, except this time the test-side consumer consumes from the application-side output topic (instead of the error topic), and I named the consumers, broker, producer, and topics differently. For example:
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@EmbeddedKafka(
partitions = 1,
controlledShutdown = false,
topics = {
AdapterStreamProperties.Constants.INPUT_TOPIC,
AdapterStreamProperties.Constants.OUTPUT_TOPIC
},
brokerProperties = {
"listeners=PLAINTEXT://localhost:9092",
"port=9092",
"log.dir=/tmp/data/logs",
"auto.create.topics.enable=true",
"delete.topic.enable=true"
}
)
public class AbstractIntegrationSuccessPathTest {
private final int retryLimit = 0;
@Autowired
protected EmbeddedKafkaBroker embeddedKafkaBroker;
//To produce data
@Autowired
protected KafkaTemplate<PreferredMediaMsgKey, SendEmailCmd> sendEmailCmdProducerTemplate;
//To read from the regular output topic
@Autowired
protected Consumer<PreferredMediaMsgKey, NotifiedEmailCmd> outputConsumer;
//Service to execute notification-preference
@Autowired
protected AdapterStreamProperties projectProperties;
protected void subscribe(Consumer consumer, String topic, int attempt) {
try {
embeddedKafkaBroker.consumeFromAnEmbeddedTopic(consumer, topic);
} catch (ComparisonFailure ex) {
if (attempt < retryLimit) {
subscribe(consumer, topic, attempt + 1);
}
}
}
}
Please let me know if I should provide any more information.
"port=9092"
Don't use a fixed port; leave that out and the embedded broker will use a random port. The consumer configs are set up in KafkaTestUtils to point to the random port.
You shouldn't need to dirty the context after each test method; use a different group.id and a different topic for each test.
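For illustration, a minimal sketch of a test class set up along those lines; the class, topic, and group names here are hypothetical, and the key points are the absence of listeners/port broker properties (so the broker binds a random port) and a group.id unique to this test class:

import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;

@EmbeddedKafka(
        partitions = 1,
        topics = { "success-input", "success-output" } // hypothetical per-test topic names
        // no "listeners"/"port" broker properties: the broker picks a random port,
        // and KafkaTestUtils reads it back from the EmbeddedKafkaBroker
)
public class SuccessPathConsumerConfigExample {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafkaBroker;

    Consumer<String, String> buildOutputConsumer() {
        // a group.id used by no other test, so committed offsets and rebalances
        // from the other test cannot interfere with this one
        Map<String, Object> props = KafkaTestUtils.consumerProps(
                "success-path-group", "true", embeddedKafkaBroker);
        return new DefaultKafkaConsumerFactory<>(
                props, new StringDeserializer(), new StringDeserializer())
                .createConsumer();
    }
}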
In my case the consumer was not closed properly. I had to do :
@After
public void tearDown() {
// shutdown hook to correctly close the streams application
Runtime.getRuntime().addShutdownHook(new Thread(outputConsumer::close));
}
}
to resolve.
Using nunit.engine 3.10.0, I can't stop an asynchronously running ITestRunner. The TestPackage is set up to be executed locally, i.e. InProcess and in the current AppDomain. As expected, no more tests are started after the second test, but the while loop never ends.
public static void Main(string[] args)
{
// 2 assemblies x 2 TestFixtures each x 2 Tests each = 8 test cases
string[] testAssemblyFileNames = { TestAssemblyFileName1, TestAssemblyFileName2 };
string assemblyDirectory = Path.GetDirectoryName(Uri.UnescapeDataString(
new UriBuilder(Assembly.GetExecutingAssembly().CodeBase).Path));
// Nunit 3.10.0
var minVersion = new Version("3.4");
ITestEngine testEngine = TestEngineActivator.CreateInstance(minVersion);
// configure a test package that executes
// in the current process and in the current domain
var testPackage = new TestPackage(testAssemblyFileNames);
testPackage.AddSetting(EnginePackageSettings.ProcessModel, "InProcess");
testPackage.AddSetting(EnginePackageSettings.DomainUsage, "None");
testPackage.AddSetting(EnginePackageSettings.DisposeRunners, "True");
testPackage.AddSetting(EnginePackageSettings.WorkDirectory, assemblyDirectory);
ITestRunner testRunner = testEngine.GetRunner(testPackage);
// prepare a listener that stops the test runner
// when the second test has been started
const bool StopAfterSecondTest = true;
int testStartedCount = 0;
var listener = new MyTestEventListener();
listener.TestStarted += (sender, eventArgs) =>
{
testStartedCount++;
if ( StopAfterSecondTest && testStartedCount == 2 )
{
testRunner.StopRun(force: true);
}
};
var testFilterBuilder = new TestFilterBuilder();
TestFilter testFilter = testFilterBuilder.GetFilter();
ITestRun testRun = testRunner.RunAsync(listener, testFilter);
bool keepRunning;
int loopCount = 0;
do
{
bool completed = testRun.Wait(500);
bool running = testRunner.IsTestRunning;
keepRunning = !completed && running;
loopCount++;
} while ( keepRunning );
Console.WriteLine($"Loop count: {loopCount}");
XmlNode resultNode = testRun.Result;
Console.WriteLine(resultNode.InnerText);
Console.ReadKey();
}
private class MyTestEventListener : ITestEventListener
{
private const string TestCaseStartPrefix = "<start-test";
private const string TestMethodTypeAttribute = " type=\"TestMethod\"";
public event EventHandler<EventArgs> TestStarted;
public void OnTestEvent(string report)
{
if ( report.StartsWith(TestCaseStartPrefix) &&
report.Contains(TestMethodTypeAttribute) )
{
TestStarted?.Invoke(this, new EventArgs());
}
}
}
If I skip waiting and try to get the test result, I get an InvalidOperationException: 'Cannot retrieve Result from an incomplete or cancelled TestRun.'
How can I stop the test runner and get the results of the tests that were completed before the stopping?
You can't do it from inside a test. Your listener is executed in the context of the test itself. For that reason, listeners are specifically forbidden from trying to change the outcome of a test. Additionally, the event is buffered and may not even be received in this case until after the test run is complete.
StopRun is intended to be called by the main runner itself, generally as triggered by some user input.
You should also take note of this issue: https://github.com/nunit/nunit/issues/3276 which prevents StopRun(true) from working under any circumstances. It was fixed in PR https://github.com/nunit/nunit/pull/3281 but is not yet in any release of the framework. You will have to either use a recent dev build of the framework or switch to StopRun(false).
Based on the answer by @Charlie, this is how to modify the code in order to stop all threads:
public static void Main(string[] args)
{
// 2 assemblies x 2 TestFixtures each x 2 Tests each = 8 test cases
// each test case includes a 200 ms delay
string[] testAssemblyFileNames = { TestAssemblyFileName1, TestAssemblyFileName2 };
string assemblyDirectory = Path.GetDirectoryName(Uri.UnescapeDataString(
new UriBuilder(Assembly.GetExecutingAssembly().CodeBase).Path));
// Nunit 3.10.0
var minVersion = new Version("3.4");
ITestEngine testEngine = TestEngineActivator.CreateInstance(minVersion);
// configure a test package that executes
// in the current process and in the current domain
var testPackage = new TestPackage(testAssemblyFileNames);
testPackage.AddSetting(EnginePackageSettings.ProcessModel, "InProcess");
testPackage.AddSetting(EnginePackageSettings.DomainUsage, "None");
testPackage.AddSetting(EnginePackageSettings.DisposeRunners, "True");
testPackage.AddSetting(EnginePackageSettings.WorkDirectory, assemblyDirectory);
ITestRunner testRunner = testEngine.GetRunner(testPackage);
var listener = new TestStartListener();
var testFilterBuilder = new TestFilterBuilder();
TestFilter testFilter = testFilterBuilder.GetFilter();
ITestRun testRun = testRunner.RunAsync(listener, testFilter);
// wait until the first test case has been started
while ( listener.Count < 1 )
{
Thread.Sleep(50);
}
bool keepRunning = true;
while ( keepRunning )
{
int testStartedCount = listener.Count;
testRunner.StopRun(force: false);
Writer.WriteLine($"{GetTimeStamp()}, Stop requested after {testStartedCount} test cases.");
// wait for less time than a single test needs to complete
bool completed = testRun.Wait(100);
bool running = testRunner.IsTestRunning;
Writer.WriteLine($"{GetTimeStamp()} Completed: {completed}, running: {running}");
keepRunning = !completed && running;
}
listener.WriteReportsTo(Writer);
XmlNode resultNode = testRun.Result;
Writer.WriteLine("Test result:");
resultNode.WriteContentTo(ResultWriter);
Console.ReadKey();
}
private class TestStartListener : List<string>, ITestEventListener
{
private const string TestCaseStartPrefix = "<start-test";
private const string TestMethodTypeAttribute = " type=\"TestMethod\"";
public event EventHandler<EventArgs> TestStarted;
public void OnTestEvent(string report)
{
if ( report.StartsWith(TestCaseStartPrefix) &&
report.Contains(TestMethodTypeAttribute) )
{
Add($"{GetTimeStamp()}, {report}");
TestStarted?.Invoke(this, new EventArgs());
}
}
public void WriteReportsTo(TextWriter writer)
{
Writer.WriteLine($"Listener was called {Count} times.");
foreach ( var report in this )
{
writer.WriteLine(report);
}
}
}
The two test assemblies are executed in the runner's process, in a single domain, and on two threads, one per test assembly. In total, two test methods are executed and pass, one from each of the two test assemblies. The other test methods are neither executed nor reported. The other test fixtures (classes) are not executed and are reported with result="Failed" label="Cancelled".
Note that testRunner.StopRun(force: false) is called repeatedly. If it were only called once, the other thread would run to completion.
I have a SQL query in my batch job that needs to get input from the user at runtime. The item reader in my batch job is defined as follows:
@StepScope
@Bean
public JdbcCursorItemReader<QueryCount> queryCountItemReader() throws Exception {
ListPreparedStatementSetter preparedStatementSetter = new ListPreparedStatementSetter() {
@Override
public void setValues(PreparedStatement pstmt) throws SQLException {
pstmt.setString(1, "#{jobparameters[fromDate]}");
pstmt.setString(2, "#{jobparameters[toDate]}");
pstmt.setString(3, "#{jobparameters[fromDate]}");
pstmt.setString(4, "#{jobparameters[toDate]}");
pstmt.setString(5, "#{jobparameters[fromDate]}");
pstmt.setString(6, "#{jobparameters[toDate]}");
pstmt.setString(7, "#{jobparameters[eventType]}");
pstmt.setString(8, "#{jobparameters[businessUnit]}");
pstmt.setString(9, "#{jobparameters[deviceCategory]}");
pstmt.setString(10, "#{jobparameters[numberOfSearchIds]}");
}
};
JdbcCursorItemReader<QueryCount> queryCountJdbcCursorItemReader = new JdbcCursorItemReader<>();
queryCountJdbcCursorItemReader.setDataSource(dataSource);
queryCountJdbcCursorItemReader.setSql(sqlQuery);
queryCountJdbcCursorItemReader.setRowMapper(new QueryCountMapper());
queryCountJdbcCursorItemReader.setPreparedStatementSetter(preparedStatementSetter);
int counter = 0;
ExecutionContext executionContext = new ExecutionContext();
queryCountJdbcCursorItemReader.open(executionContext);
try {
QueryCount queryCount;
while ((queryCount = queryCountJdbcCursorItemReader.read()) != null) {
System.out.println(queryCount.toString());
counter++;
}
}catch (Exception e){
e.printStackTrace();
}finally {
queryCountJdbcCursorItemReader.close();
}
return queryCountJdbcCursorItemReader;
}
I am sending in the job parameters from my application class as follows
JobParameters jobParameters = new JobParametersBuilder()
.addString("fromDate", "20180410")
.addString("toDate", "20180410")
.addString("eventType", "WEB")
.addString("businessUnit", "UPT")
.addString("numberOfSearchIds", "10")
.toJobParameters();
JobExecution execution = jobLauncher.run(job, jobParameters);
The issue is that when I run my batch job, the code inside the queryCountItemReader() method is never executed, and the job completes with no errors. Essentially, the SQL query I am trying to run never executes. If I remove the @StepScope annotation the code runs, but it fails with an error since it is unable to bind the parameters sent in from the application class to the SQL query. I realize that @StepScope is necessary to use job parameters, but why doesn't the code in my method execute?
I solved this by adding the @EnableBatchProcessing and @EnableAutoConfiguration annotations and changing the item reader method definition as follows:
@StepScope
@Bean
public JdbcCursorItemReader<QueryCount> queryCountItemReader(@Value("#{jobParameters['fromDate']}") String fromDate,
@Value("#{jobParameters['toDate']}") String toDate,
@Value("#{jobParameters['eventType']}") String eventType,
@Value("#{jobParameters['businessUnit']}") String businessUnit,
@Value("#{jobParameters['deviceCategory']}") String deviceCategory,
@Value("#{jobParameters['numberOfSearchIds']}") String numberOfSearchIds) throws Exception {
I am trying to launch a job in Spring Batch 2, and I need to pass some information in the job parameters, but I do not want it to count toward the uniqueness of the job instance. For example, I'd want these two sets of parameters to be considered identical:
file=/my/file/path,session=1234
file=/my/file/path,session=5678
The idea is that there will be two different servers trying to start the same job, but with different sessions attached to them. I need that session number in both cases. Any ideas?
Thanks!
So, if 'file' is the only attribute that's supposed to be unique and 'session' is used by downstream code, then your problem matches almost exactly what I had. I had a JMSCorrelationId that I needed to store in the execution context for later use, and I didn't want it to play into the job parameters' uniqueness. Per Dave Syer, this really wasn't possible, so I took the route of creating the job with the identifying parameters (not the 'session' in your case), and then adding the 'session' attribute to the execution context before anything actually runs.
This gave me access to 'session' downstream, but since it was not in the job parameters it didn't affect uniqueness.
References
https://jira.springsource.org/browse/BATCH-1412
http://forum.springsource.org/showthread.php?104440-Non-Identity-Job-Parameters&highlight=
You'll see from this forum thread that there's no good way to do it (per Dave Syer), so I wrote my own launcher based on the SimpleJobLauncher (in fact I delegate to the SimpleJobLauncher if a non-overloaded method is called) that has an overloaded method for starting a job. That method takes a callback interface that allows contributing parameters to the execution context while not being 'true' job parameters. You could do something very similar.
I think the applicable lines of code for you are right here:
jobExecution = jobRepository.createJobExecution(job.getName(),
jobParameters);
if (contributor != null) {
if (contributor.contributeTo(jobExecution.getExecutionContext())) {
jobRepository.updateExecutionContext(jobExecution);
}
}
which is where the execution context is added to, right after its creation. Hopefully this helps you in your implementation.
public class ControlMJobLauncher implements JobLauncher, InitializingBean {
private JobRepository jobRepository;
private TaskExecutor taskExecutor;
private SimpleJobLauncher simpleLauncher;
private JobFilter jobFilter;
public void setJobRepository(JobRepository jobRepository) {
this.jobRepository = jobRepository;
}
public void setTaskExecutor(TaskExecutor taskExecutor) {
this.taskExecutor = taskExecutor;
}
/**
* Optional filter to prevent job launching based on some specific criteria.
* Jobs that are filtered out will return success to ControlM, but will not run
*/
public void setJobFilter(JobFilter jobFilter) {
this.jobFilter = jobFilter;
}
public JobExecution run(final Job job, final JobParameters jobParameters, ExecutionContextContributor contributor)
throws JobExecutionAlreadyRunningException, JobRestartException,
JobInstanceAlreadyCompleteException, JobParametersInvalidException, JobFilteredException {
Assert.notNull(job, "The Job must not be null.");
Assert.notNull(jobParameters, "The JobParameters must not be null.");
//See if job is filtered
if(this.jobFilter != null && !jobFilter.launchJob(job, jobParameters)) {
throw new JobFilteredException(String.format("Job has been filtered by the filter: %s", jobFilter.getFilterName()));
}
final JobExecution jobExecution;
JobExecution lastExecution = jobRepository.getLastJobExecution(job.getName(), jobParameters);
if (lastExecution != null) {
if (!job.isRestartable()) {
throw new JobRestartException("JobInstance already exists and is not restartable");
}
logger.info(String.format("Restarting job %s instance %d", job.getName(), lastExecution.getId()));
}
// Check the validity of the parameters before creating anything
// in the repository...
job.getJobParametersValidator().validate(jobParameters);
/*
* There is a very small probability that a non-restartable job can be
* restarted, but only if another process or thread manages to launch
* <i>and</i> fail a job execution for this instance between the last
* assertion and the next method returning successfully.
*/
jobExecution = jobRepository.createJobExecution(job.getName(),
jobParameters);
if (contributor != null) {
if (contributor.contributeTo(jobExecution.getExecutionContext())) {
jobRepository.updateExecutionContext(jobExecution);
}
}
try {
taskExecutor.execute(new Runnable() {
public void run() {
try {
logger.info("Job: [" + job
+ "] launched with the following parameters: ["
+ jobParameters + "]");
job.execute(jobExecution);
logger.info("Job: ["
+ job
+ "] completed with the following parameters: ["
+ jobParameters
+ "] and the following status: ["
+ jobExecution.getStatus() + "]");
} catch (Throwable t) {
logger.warn(
"Job: ["
+ job
+ "] failed unexpectedly and fatally with the following parameters: ["
+ jobParameters + "]", t);
rethrow(t);
}
}
private void rethrow(Throwable t) {
if (t instanceof RuntimeException) {
throw (RuntimeException) t;
} else if (t instanceof Error) {
throw (Error) t;
}
throw new IllegalStateException(t);
}
});
} catch (TaskRejectedException e) {
jobExecution.upgradeStatus(BatchStatus.FAILED);
if (jobExecution.getExitStatus().equals(ExitStatus.UNKNOWN)) {
jobExecution.setExitStatus(ExitStatus.FAILED
.addExitDescription(e));
}
jobRepository.update(jobExecution);
}
return jobExecution;
}
static interface ExecutionContextContributor {
boolean CONTRIBUTED_SOMETHING = true;
boolean CONTRIBUTED_NOTHING = false;
/**
*
* @param executionContext
* @return true if the execution context was contributed to
*/
public boolean contributeTo(ExecutionContext executionContext);
}
@Override
public void afterPropertiesSet() throws Exception {
Assert.state(jobRepository != null, "A JobRepository has not been set.");
if (taskExecutor == null) {
logger.info("No TaskExecutor has been set, defaulting to synchronous executor.");
taskExecutor = new SyncTaskExecutor();
}
this.simpleLauncher = new SimpleJobLauncher();
this.simpleLauncher.setJobRepository(jobRepository);
this.simpleLauncher.setTaskExecutor(taskExecutor);
this.simpleLauncher.afterPropertiesSet();
}
@Override
public JobExecution run(Job job, JobParameters jobParameters)
throws JobExecutionAlreadyRunningException, JobRestartException,
JobInstanceAlreadyCompleteException, JobParametersInvalidException {
return simpleLauncher.run(job, jobParameters);
}
}
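For illustration, a hedged sketch of invoking the overloaded run method with the contributor (the launcher, job, and session value are assumed to be wired elsewhere; exception handling is omitted):

JobParameters identifyingParams = new JobParametersBuilder()
        .addString("file", "/my/file/path") // only 'file' identifies the instance
        .toJobParameters();
JobExecution execution = controlMJobLauncher.run(job, identifyingParams,
        new ControlMJobLauncher.ExecutionContextContributor() {
            public boolean contributeTo(ExecutionContext executionContext) {
                // 'session' is available downstream via the execution context
                // but plays no part in job-instance identity
                executionContext.putString("session", "1234");
                return CONTRIBUTED_SOMETHING;
            }
        });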
Starting from Spring Batch 2.2.x, there is support for non-identifying parameters. If you are using the CommandLineJobRunner, you can specify non-identifying parameters with a '-' prefix.
For example:
java org.springframework.batch.core.launch.support.CommandLineJobRunner file=/my/file/path -session=5678
If you are using an older version of Spring Batch, you need to migrate your database schema. See the 'Migrating to 2.x.x' section at http://docs.spring.io/spring-batch/getting-started.html.
This is the Jira page for the feature: https://jira.springsource.org/browse/BATCH-1412, and these are the changes that implement it: https://fisheye.springsource.org/changelog/spring-batch?cs=557515df45c0f596588418d53c3f2bae3781c1c3
In more recent versions of Spring Batch (I am using spring-batch-core:4.3.3), you can use the JobParametersBuilder to specify whether a parameter is identifying or not. For example:
new JobParametersBuilder()
.addString("identifying-param-name", paramValue1)
.addString("non-identifying-param-name", paramValue2, false)
.toJobParameters();
The 'false' in the third argument makes the parameter non-identifying.
I am running a JUnit test case from Eclipse 3.4.1. This test case creates a class which starts a thread to do some stuff. When the test method ends, it seems that Eclipse forcibly shuts down the thread.
If I run the same test from the command line, the thread runs properly.
Somehow I do not remember running into such problems with Eclipse before. Is this something that was always present in Eclipse, or did they add it in 3.4.x?
Here is an example:
When I run this test from Eclipse, I get a few prints of the cnt (up to about 1800) and then the test case is terminated automatically. However, if I run the main method, which starts JUnit's TestRunner, the thread counts indefinitely.
import junit.framework.TestCase;
import junit.textui.TestRunner;
/**
* This class shows that Eclipses JUnit test case runner will forcibly
* terminate all running threads
*
* @author pshah
*
*/
public class ThreadTest extends TestCase {
static Runnable run = new Runnable() {
public void run() {
int cnt = 0;
while(true) System.out.println(cnt++);
}
};
public void testThread() {
Thread t = new Thread(run);
t.start();
}
public static void main(String args[]) {
TestRunner runner = new TestRunner();
runner.run(ThreadTest.class);
}
}
I adapted your code to the annotation-based JUnit 4 style and it's the same result: the thread is killed.
public class ThreadTest {
static Runnable run = new Runnable() {
public void run() {
int cnt = 0;
while (true)
System.out.println(cnt++);
}
};
@Test
public void threadRun() {
Thread t = new Thread(run);
t.start();
assertEquals("RUNNABLE", t.getState().toString());
}
}
If I use the JUnit jar (4.3.1 in my case) from the Eclipse plugin folder to execute the tests via the command line, it shows the same behavior as executing them in Eclipse (which is logical :) ).
I tested JUnit 4.6 (just downloaded) on the command line and it also stops after a short time! It's exactly the same behavior as in Eclipse.
I found out that the thread is killed once the last instruction of the test is done. It's logical if you consider how JUnit works:
for each test a new object is created, and when the test is over, it is discarded along with everything belonging to it.
That means every thread must be stopped.
JUnit deals correctly with this situation. Unit tests must be isolated and easy to execute, so it ends all threads when the end of the test is reached.
You may wait until the thread is finished and then execute your assertXXX instructions; this would be the right way to test threads.
But be careful: it may kill your execution times!
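A minimal sketch of that waiting approach, assuming the worker terminates on its own (the endless loop from the example is replaced by a bounded one so join can return):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class BoundedThreadTest {

    static class CountingTask implements Runnable {
        volatile int cnt = 0;

        public void run() {
            while (cnt < 1000) { // bounded, so the thread can finish
                cnt++;
            }
        }
    }

    @Test
    public void threadRunsToCompletion() throws InterruptedException {
        CountingTask task = new CountingTask();
        Thread t = new Thread(task);
        t.start();
        t.join(5000); // wait (at most 5 s) for the worker before asserting
        assertEquals("TERMINATED", t.getState().toString());
        assertEquals(1000, task.cnt);
    }
}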
I believe this modification will yield the desired result for unit testing various thread scenarios.
(sorry if the formatting is wonky)
public class ThreadTest {
static Runnable run = new Runnable() {
public void run() {
int cnt = 0;
while (true)
System.out.println(cnt++);
}
};
@Test
public void threadRun() {
Thread t = new Thread(run);
t.start();
//Run the thread, t, for 30 seconds total.
//Assert the thread's state is RUNNABLE, once per second
for(int i=0;i<30;i++){
assertEquals("RUNNABLE", t.getState().toString());
try {
Thread.sleep(1000);//1 second sleep
} catch (InterruptedException e) {
e.printStackTrace();
}
}
System.out.println("Done with my thread unit test.");
}
}
This works, but you have to name your thread or find another way to refer to it.
protected boolean monitorSecondaryThread(String threadName, StringBuilder errorMessage, boolean ignoreFailSafe) {
int NUM_THREADS_BESIDES_SECONDARY_THREAD = 2;
int MAX_WAIT_TIME = 10000;
MyUncaughtExceptionHandler meh = new MyUncaughtExceptionHandler();
Set<Thread> threadSet = Thread.getAllStackTraces().keySet();
for (Thread t : threadSet) {
t.setUncaughtExceptionHandler(meh);
}
Date start = Calendar.getInstance().getTime();
boolean stillAlive = true;
while (stillAlive) {
for (Thread t : threadSet) {
if (t.getName().equalsIgnoreCase(threadName) && !t.isAlive()) {
stillAlive = false;
}
}
Date end = Calendar.getInstance().getTime();
if (!ignoreFailSafe && (end.getTime() - start.getTime() > MAX_WAIT_TIME || Thread.activeCount() <= NUM_THREADS_BESIDES_SECONDARY_THREAD)) {
System.out.println("Oops, flawed thread monitor.");
stillAlive = false;
}
}
if (meh.errorCount > 0) {
System.out.println(meh.error);
errorMessage.append(meh.error);
return false;
}
return true;
}
private class MyUncaughtExceptionHandler implements UncaughtExceptionHandler {
public int errorCount = 0;
public String error = "";
@Override
public void uncaughtException(Thread t, Throwable e) {
ByteArrayOutputStream bs = new ByteArrayOutputStream();
PrintStream ps = new PrintStream(bs);
e.printStackTrace(ps);
error = bs.toString();
errorCount++;
}
}