Salesforce deployment error because of test class failure

We are encountering a deployment error caused by test classes that call a batch Apex class. The error is:
"System.UnexpectedException: No more than one executeBatch can be called within a test method."
In our test class there are insert and update statements that fire a trigger, which in turn calls the batch Apex class. We have also tried limiting the batch query using Test.isRunningTest(), but we still hit the same error.
The code works fine in the sandbox; the error occurs only when deploying to production.
Also, the test classes causing the error previously worked fine in production.
Please provide some pointers or a solution for the above error.
Thank you.

I would suggest ensuring the trigger doesn't execute the batch when Test.isRunningTest() is true, and then testing the batch class with its own dedicated test method. I suspect your trigger fires twice, so batch instances are created and run more than once.
In a dedicated test method you can execute the batch with a limited query, and you should use the optional batch-size parameter to control the number of calls to execute(). For example, if your query is limited to 50 records but you do this:
Database.executeBatch(myBatchInstance, 25);
it will still need to call the execute() method twice to cover all the records, and that is where you hit problems like the one you mentioned.
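As a sketch, using a hypothetical trigger and batch class (AccountAfter and MyBatch are made-up names), the guard plus a dedicated test method could look like this in Apex:

    // Hypothetical trigger: skip the batch while tests run, so DML in other
    // test methods doesn't start additional batch jobs.
    trigger AccountAfter on Account (after insert, after update) {
        if (!Test.isRunningTest()) {
            Database.executeBatch(new MyBatch());
        }
    }

    // Dedicated test: exactly one executeBatch call between startTest/stopTest.
    @isTest
    private class MyBatchTest {
        @isTest static void runsBatchOnce() {
            // insert a small, known data set here (fewer records than the batch size)
            Test.startTest();
            Database.executeBatch(new MyBatch(), 200); // one execute() call for <= 200 records
            Test.stopTest(); // forces the queued batch to finish before assertions
            // assert on the batch's results here
        }
    }

With the batch size at least as large as the test data set, execute() runs exactly once, which keeps you inside the one-executeBatch-per-test limit.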


Understanding JobLauncherTestUtils

I am currently trying to understand JobLauncherTestUtils. I have read about it in multiple resources, such as the following:
https://docs.spring.io/spring-batch/docs/current/api/org/springframework/batch/test/JobLauncherTestUtils.html
https://livebook.manning.com/concept/spring/joblaunchertestutils
I want to understand: when we call jobLauncherTestUtils.launchJob(), what is meant by end-to-end testing of a job? Does it actually launch the job? If so, what's the point of testing the job without mocks? If not, how does it actually test a job?
I want to understand: when we call jobLauncherTestUtils.launchJob(), what is meant by end-to-end testing of a job?
End-to-End testing means testing the job as a black box based on the specification of its input and output. For example, let's assume your batch job is expected to read data from a database table and write it to a flat file.
An end-to-end test would:
Populate a test database with some sample records
Run your job
Assert that the output file contains the expected records
Without individually testing the inner steps of this job, you are testing its functionality from end (input) to end (output).
JobLauncherTestUtils is a utility class that allows you to run an entire job like this. It also allows you to test a single step from a job in isolation if you want.
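A minimal sketch of such an end-to-end test, assuming a Spring Boot setup with the job under test in the context (the table, file path, and expected content are hypothetical):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    import org.junit.jupiter.api.Test;
    import org.springframework.batch.core.ExitStatus;
    import org.springframework.batch.core.JobExecution;
    import org.springframework.batch.test.JobLauncherTestUtils;
    import org.springframework.batch.test.context.SpringBatchTest;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.jdbc.core.JdbcTemplate;

    @SpringBatchTest
    @SpringBootTest
    class ExportJobEndToEndTest {

        @Autowired
        private JobLauncherTestUtils jobLauncherTestUtils; // registered by @SpringBatchTest

        @Autowired
        private JdbcTemplate jdbcTemplate;

        @Test
        void jobWritesAllRecordsToFile() throws Exception {
            // end 1 (input): populate the test database with a known record
            jdbcTemplate.update("insert into person (id, name) values (1, 'Jane')");

            // run the whole job as a black box
            JobExecution execution = jobLauncherTestUtils.launchJob();

            // end 2 (output): assert on the exit status and the file contents
            assertEquals(ExitStatus.COMPLETED, execution.getExitStatus());
            List<String> lines = Files.readAllLines(Path.of("target/output.csv"));
            assertEquals(List.of("1,Jane"), lines);
        }
    }

For testing a single step in isolation, the same utility offers jobLauncherTestUtils.launchStep("stepName").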
Does it actually launch the job?
Yes, the job is run just as it would be outside a test. JobLauncherTestUtils is just a utility class that uses a regular JobLauncher behind the scenes; you can run your job in unit tests without it.
If so, what's the point of testing the job without mocks?
The point of testing a job without mocks is to ensure it works as expected with the real resources it depends on or interacts with. You can always mock a database or a message broker in your tests, but the mocking code could itself be buggy and may not reflect the real behaviour of the database or broker.

A file prepared by one spring batch job is not accessible to other for deletion

I have a requirement where one job prepares a file, and another job, which runs once a day, sends the file to an external system and then deletes or moves it from the location. When the second job tries to delete or move the file, it can't access it.
I tried setting the file to writable when it is created, running the jobs at separate times (one job at a time), and adding a "delete" step to the same job. Nothing worked.
I am using file.delete(), and I also tried Files.deleteIfExists().
I suspect the first job is not assigning the proper permissions, but I don't know how to set permissions in Spring Batch.
Are these jobs run by the same user, i.e. with the same permissions?
Also, what is the actual error message? Does it say permission denied? If so, it is likely an OS restriction, not a Spring Batch/Java limitation.
An easier solution would be to add a step to the first job that sends the files as part of that job, and drop the job that just transfers them.
Answering my own question 😀. Hope it helps someone.
The issue was that the last ItemWriter was holding the resources, because I was using a composite writer. With CompositeItemWriter, the beforeStep and afterStep methods are “hidden”: you have to call them explicitly. I chose the approach of writing a custom writer that explicitly calls writer.close().
Adding an afterStep method and calling super.close() should also work, though I have not tried that out.
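A minimal sketch of such a custom writer, assuming Spring Batch 5 signatures (the Chunk-based write method) and a FlatFileItemWriter delegate (the class name is made up): it opens the delegate before the step and closes it afterwards, so the file handle is released before another job tries to delete or move the file.

    import org.springframework.batch.core.ExitStatus;
    import org.springframework.batch.core.StepExecution;
    import org.springframework.batch.core.StepExecutionListener;
    import org.springframework.batch.item.Chunk;
    import org.springframework.batch.item.ItemWriter;
    import org.springframework.batch.item.file.FlatFileItemWriter;

    public class ClosingFileWriter implements ItemWriter<String>, StepExecutionListener {

        private final FlatFileItemWriter<String> delegate;

        public ClosingFileWriter(FlatFileItemWriter<String> delegate) {
            this.delegate = delegate;
        }

        @Override
        public void beforeStep(StepExecution stepExecution) {
            // open the delegate explicitly; it is not registered as a stream on the step
            delegate.open(stepExecution.getExecutionContext());
        }

        @Override
        public void write(Chunk<? extends String> items) throws Exception {
            delegate.write(items);
        }

        @Override
        public ExitStatus afterStep(StepExecution stepExecution) {
            // release the file handle so a later job can delete or move the file
            delegate.close();
            return stepExecution.getExitStatus();
        }
    }

Because the writer implements StepExecutionListener, Spring Batch should pick the callbacks up automatically when it is set as the step's writer; otherwise, register it as a step listener explicitly.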

How can I roll back DAO tests using scala PlaySpec and Slick

I'm trying to flesh out my application's abstract DAO test harness to support rolling back any test modifications to the database. I know Slick supports transactions with db.run(someAction.transactionally), but that doesn't work as part of an abstract test class, since the DB actions need to be run in the actual test method.
Currently, I'm attempting to wrap the test method with BeforeAndAfter's runTest method and to find some Slick mechanism that lets me wrap the test execution in a transaction. It feels like the correct first step, but I'm struggling to figure out how to avoid interfering with regular test creation while still being able to roll back transactions (i.e. I don't want to manually add a DBIOAction.failure to every test that changes DB state).
I've tried setting autocommit=false around the method, e.g.
db.run(
  SimpleJdbcOperation(_.connection.setAutoCommit(false)) andThen
  DBIOAction.successful(super.runTest) zip
  SimpleJdbcOperation(_.connection.rollback()))
but I think the connection pool is foiling that particular approach: getting the autocommit status inside the test method returns true, and the rollback doesn't do anything.
Is there anything I can do here short of hacky (a manual DBIOAction.failure()) or wasteful (dropping and recreating the table/schema after every test) solutions?
For now I'm going with https://stackoverflow.com/a/34953817/1676006, but I still feel like there should be a better way.

Play Model save function isn't actually writing to the database

I have a Play model called "JobStatus" with just one property: an enum with a JobState (Running/NotRunning).
The class extends Model and is implemented as a singleton; you call its getInstance() method to get the only record in the underlying table.
I have a job that runs every month, and during the job I toggle the state of the JobStatus object back and forth at various times and call .save().
I've noticed it isn't actually saving.
When the job starts, its first lines of code are:

    JobStatus thisJobStatus = JobStatus.getInstance();
    // ... exit if already running
    thisJobStatus.JobState = JobState.Running;
    thisJobStatus.save();
Then when the job is done, it changes the status back to NotRunning and saves again.
The issue is that when I look in the MySQL database, the actual record value never changes.
This causes a catastrophic failure, because when other nodes check the state before running the job, they see NotRunning, so they all try to run the job as well.
So my clever scheme for managing job state is failing because the actual value isn't getting committed to the DB.
How do I force Play to write to the DB right away when I call .save() on a model?
Thanks,
Josh
Try adding this to your JobStatus and calling it after save():

    public static void commit() {
        // commit the current transaction so pending changes reach the database
        JobStatus.em().getTransaction().commit();
        // start a new transaction so the rest of the job can keep working
        JobStatus.em().getTransaction().begin();
        // flush and clear the persistence context
        JobStatus.em().flush();
        JobStatus.em().clear();
    }
I suppose you want to mark your job as "running" pretty much as the first thing when the job starts? In that case, you shouldn't have any other ongoing database statements yet...
To commit your changes to the database immediately (instead of after the job has ended), add the following calls after the thisJobStatus.save(); call:

    JPA.em().flush();
    JPA.em().getTransaction().commit();

Additionally, since you're using MySQL, you might want to lock the row immediately upon retrieval using the SELECT ... FOR UPDATE clause (see the MySQL Reference Manual for more information). Of course, you wouldn't want that in your getInstance() method, otherwise every fetch operation would lock the record.
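As a sketch of that locking idea, assuming JPA 2.0 and a single row with a known id (the method name and the id value are hypothetical), a pessimistic write lock makes JPA issue SELECT ... FOR UPDATE on MySQL:

    import javax.persistence.LockModeType;

    // Hypothetical variant of getInstance() that locks the row until the
    // surrounding transaction commits, so other nodes block instead of
    // reading a stale NotRunning state.
    public static JobStatus getInstanceForUpdate() {
        return JobStatus.em().find(JobStatus.class, 1L, LockModeType.PESSIMISTIC_WRITE);
    }

The job would call this variant only for the start-of-job check, leaving the plain getInstance() untouched for ordinary reads.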

Retry period after an unhandled exception in a Workflow

Currently, if our workflow application encounters an unhandled exception, it reloads the workflow from the most recently persisted state and tries again. Is there any way to configure exactly how this works? If a service is down, for example, the workflow reloads roughly every second and tries to run again, which, when multiple workflows are all doing the same thing, can result in thousands of exceptions per minute.
I think the timeToPersist and timeToUnload properties on workflowIdle might have something to do with this. Currently we have this set to:
If I set timeToUnload to 1 minute, will that mean the workflow can only retry once every minute?
TimeToPersist and TimeToUnload won't come into play here: those values determine how long a workflow has to be idle before being persisted/unloaded.
You can probably use WorkflowApplication.OnUnhandledException to create a catch-all exception handler (assuming you're using this class to create workflows).
http://msdn.microsoft.com/en-us/library/system.activities.workflowapplication.onunhandledexception.aspx