Salesforce system scheduled job not executing

I have created a scheduler class called scheduledInsert. The scheduled job for this class is registered with the following code:
public class TestInsertTaskScheduler
{
    public static testMethod void testInsertTaskScheduler()
    {
        scheduledInsert i = new scheduledInsert();
        Datetime now = Datetime.now();
        System.debug('Datetime ' + now);
        String sch = '0 1 * * * ?'; // intended to run every minute, but this actually fires at minute 1 of every hour
        System.schedule('Insert Task S3', sch, i);
        System.debug('After schedule');
    }
}
The scheduled class code is:
global class scheduledInsert implements Schedulable
{
    global void execute(SchedulableContext SC)
    {
        System.debug('scheduled insert');
    }
}
This job (Insert Task S3) does not appear under Monitoring > Scheduled Jobs.
The job also never executes.
What mistake am I making?

I'm fairly sure it's because you're using a test method. Nothing executed inside a test method is committed to the Salesforce database, so it makes sense that the job doesn't appear in the Scheduled Jobs section. Try removing the testMethod keyword, run it again, and see if it appears.

Related

Apache Shiro: SecurityUtils.getSubject() for a scheduled task user?

I want to implement the following scenario:
I have an EJB scheduler which should run every minute.
My issue is the following:
I need a login for the user who executes the schedule. This should be a system user; there will be no login via the GUI.
How can I log in this user and execute further tasks?
Currently I'm trying this in my class:
@Singleton
@LocalBean
@Startup
public class Scheduler {
    public void startSchedule() {
        Subject currentUserShiro = SecurityUtils.getSubject();
        UsernamePasswordToken token = new UsernamePasswordToken("test@domain.com", "test1234");
        currentUserShiro.login(token);
    }
}
In one of my functions I check, e.g., for the permission:
SecurityUtils.getSubject().isPermitted("billingInvoice:create")
Currently I'm getting the following error:
No SecurityManager accessible to the calling code, either bound to the org.apache.shiro.util.ThreadContext or as a vm static singleton. This is an invalid application configuration.
Any idea?
Code update:
private void addScheduleToList(ScheduleExecution scheduleExecution) throws UnknownHostException {
    synchronized (this) {
        Factory<SecurityManager> factory = new IniSecurityManagerFactory("classpath:shiro-web.ini");
        SecurityManager securityManager = factory.getInstance();
        SecurityUtils.setSecurityManager(securityManager);
        Subject subject = new Subject.Builder().buildSubject();
        Runnable myRunnable = null;
        subject.execute(myRunnable = new Runnable() {
            @Override
            public void run() {
                // Add tasks
                executeAction(scheduleExecution);
            }
        });
        //////
        schedulingService.addTaskToExecutor(taskId, myRunnable, 0);
    }
}
I'm now no longer getting the error message I got initially, but it seems I'm getting a PermissionException, because the user doesn't have the permission. If I check the Subject object, it is not authenticated. This Subject needs full permissions. How can I do this?
SecurityUtils.getSubject().isPermitted("billingInvoice:create") == false
You have a couple of options.
Move your permission checking to your web-based methods; this moves the permission check outside of your scheduled task (this isn't always possible, and since you are asking the question, I'm assuming you want the second option).
You need to execute your task in the context of a user. Create a new Subject using a Subject.Builder and then call execute with a Runnable from your task.
See https://shiro.apache.org/subject.html specifically the "Thread Association" section.
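For example, here is a minimal sketch of that approach. The realm name ("iniRealm"), the "system" principal, and an ini file that grants that user billingInvoice:create are all illustrative assumptions, not taken from your code:

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.config.IniSecurityManagerFactory;
import org.apache.shiro.mgt.SecurityManager;
import org.apache.shiro.subject.PrincipalCollection;
import org.apache.shiro.subject.SimplePrincipalCollection;
import org.apache.shiro.subject.Subject;
import org.apache.shiro.util.Factory;

public class SystemUserTask {
    public void runAsSystemUser() {
        // Bind a SecurityManager first; otherwise SecurityUtils.getSubject()
        // throws the "No SecurityManager accessible" error from the question.
        Factory<SecurityManager> factory = new IniSecurityManagerFactory("classpath:shiro-web.ini");
        SecurityUtils.setSecurityManager(factory.getInstance());

        // Build a Subject that carries a known principal instead of an
        // anonymous one; permission checks then resolve against the realm.
        PrincipalCollection principals = new SimplePrincipalCollection("system", "iniRealm");
        Subject systemSubject = new Subject.Builder()
                .principals(principals)
                .authenticated(true)
                .buildSubject();

        systemSubject.execute(new Runnable() {
            @Override
            public void run() {
                // Runs with the "system" identity bound to the current thread.
                System.out.println(SecurityUtils.getSubject().isPermitted("billingInvoice:create"));
            }
        });
    }
}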

Why does Spring Batch create a new instance when I try to restart the day after a failed job?

[Screenshot: job instance and execution data from the batch repository]
I have a failed execution instance in my repository, dated 2016-03-14.
If I try to restart that job instance on 2016-03-15, a new instance and a new execution with the previous job's parameters (2016-03-14) are created.
But the job then restarts the complete step instead of recovering (resuming at the last line before the failure).
Why do I get a new instance?
If I restart on the same day (failed job and restarted job on the same date), there is no problem: one instance is shared between the job executions.
EDIT:
I start my job with this code:
@Bean
public Job myJob(JobBuilderFactory jobs, Step stepInjectCsvWsIntoCsv) {
    return jobs.get("myJob")
            .listener(new JobListener())
            .incrementer(new RunIdDateIncrementor())
            .flow(stepInjectCsvWsIntoCsv)
            .end().build();
}
RunIdDateIncrementor is my own class; it's where I create the parameters (run.id and run.date).
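Since the question doesn't show its source, here is a hypothetical sketch of what such an incrementer might look like. The key point is that a date-valued identifying parameter such as run.date makes the JobInstance key change from one day to the next:

import java.util.Date;

import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.JobParametersIncrementer;

public class RunIdDateIncrementor implements JobParametersIncrementer {

    @Override
    public JobParameters getNext(JobParameters parameters) {
        // Bump run.id on each launch and stamp the current date; both become
        // identifying parameters, so a new date means a new JobInstance.
        JobParameters params = (parameters == null) ? new JobParameters() : parameters;
        long runId = params.getLong("run.id", 0L) + 1;
        return new JobParametersBuilder()
                .addLong("run.id", runId)
                .addDate("run.date", new Date())
                .toJobParameters();
    }
}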
I use a FlatFileItemReader and a CompositeWriter which manages two MultiResourceItemWriters and implements ResourceAwareItemWriterItemStream.
And the step configuration:
@Bean(name = "stepInjectCsvWsIntoCsv")
public Step stepInjectCsvWsIntoCsv(StepBuilderFactory stepBuilderFactory, ItemReader<GetDataInCsv> csvReader,
        CompositeTwoCsvFileItemWriter getDataWriter,
        ItemProcessor<GetDataInCsv, List<GetDataOutCsv>> getDataProcessor) {
    /* it handles batches of 10 units => limited to 10 stations */
    return stepBuilderFactory.get("stepInjectCsvWsIntoCsv").listener(new StepListener())
            .<GetDataInCsv, List<GetDataOutCsv>> chunk(1)
            .reader(csvReader).processor(getDataProcessor).writer(getDataWriter)
            .faultTolerant().skipLimit(1000).skip(GetDataFault.class)
            .listener(new CustomChunkListener())
            .listener(new CustomItemReaderListener())
            .listener(new GetDataItemProcessListener())
            .listener(new CustomItemWriterListener())
            .build();
}
I then get a new instance and an empty execution context, so the restart isn't detected.
I use Spring Boot too.
The launch:
@SpringBootApplication
public class BatchWsVersCsv implements CommandLineRunner {

    public static void main(String[] args) {
        Logger logger = LoggerFactory.getLogger(BatchWsVersCsv.class);
        SpringApplication springApplication = new SpringApplication(new Object[] { BatchWsVersCsv.class });
        Map<String, Object> defaultProperties = new HashMap<String, Object>();
        // set some default properties
        // ...
        springApplication.setDefaultProperties(defaultProperties);
        springApplication.run(args);
    }

    public void run(String... strings) throws Exception {
        System.out.println("running...");
    }
}
OK, it's my bad. To test an error event from the day before, I ran a query against the database that updated all the dates.
As a result, the serialized key in BATCH_JOB_INSTANCE no longer matches.
If I instead change the system date to generate the error on the previous day, everything works perfectly when I restart the day after.
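That matches how Spring Batch identifies an instance: the JOB_KEY column in BATCH_JOB_INSTANCE is a hash of the serialized identifying parameters, so hand-editing the stored dates breaks the match. A small illustrative sketch (DefaultJobKeyGenerator is the class spring-batch-core uses for this):

import java.util.Date;

import org.springframework.batch.core.DefaultJobKeyGenerator;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;

public class JobKeyDemo {

    public static void main(String[] args) {
        // The same job name plus the same identifying parameters yield the
        // same key; a restart is only matched to an instance via this key.
        JobParameters parameters = new JobParametersBuilder()
                .addDate("run.date", new Date())
                .toJobParameters();
        String jobKey = new DefaultJobKeyGenerator().generateKey(parameters);
        System.out.println("JOB_KEY = " + jobKey);
    }
}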

JPA: Automatically delete data after a time interval

I have the following scenario:
Data will be created by the sender on the server.
If the receiver doesn't request the data within (for example) three days, the data should be deleted; otherwise the receiver gets the data, and after it has been fetched, the data is deleted.
In other words:
The data should only exist for a limited time.
I could create a little jar file which deletes the data, probably started as a cron job. But I think that would be a very bad solution.
Is it possible to create a trigger with JPA/Java EE that is called after a specific time period? As far as I know, triggers can only be called before/after insert, update, and delete events, so that won't be a solution.
Note: I'm currently using an H2 database and WildFly for my RESTEasy application. But I might change that in the future, so the solution should be adaptable.
Best regards,
Maik
Java EE brings everything you need. You can @Schedule a periodically executed cleanup job:
@Singleton
public class Cleanup {

    // Note: @Schedule defaults hour to "0", so hour = "*" is needed
    // for this to actually run every 5 minutes all day long.
    @Schedule(minute = "*/5", hour = "*")
    public void clean() {
        // will be called every 5 minutes
    }
}
Or you can programmatically create a Timer which will fire after a certain amount of time:
@Singleton
public class Cleanup {

    @Resource
    TimerService timerService;

    public void deleteLater(long id) {
        String timerInfo = MyEntity.class.getName() + ':' + id;
        long timeout = 1000 * 60 * 5; // 5 minutes
        timerService.createTimer(timeout, timerInfo);
    }

    @Timeout
    public void handleTimeout(Timer timer) {
        Serializable timerInfo = timer.getInfo(); // the information you passed to the timer
    }
}
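To make the timeout handler concrete, here is a minimal sketch of what it could do. The injected EntityManager and the MyEntity lookup are illustrative assumptions; the info string format is the one built in deleteLater() above:

@PersistenceContext
EntityManager entityManager;

@Timeout
public void handleTimeout(Timer timer) {
    // timerInfo was built as "<entity class name>:<id>" in deleteLater()
    String[] info = timer.getInfo().toString().split(":");
    long id = Long.parseLong(info[1]);
    MyEntity entity = entityManager.find(MyEntity.class, id);
    if (entity != null) { // may already be gone if the receiver fetched it
        entityManager.remove(entity);
    }
}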
More information can be found in the Timer Service Tutorial.

How can I schedule a child activity to execute *before* the parent is completed?

I have a WF (4.5) workflow activity that creates a child workflow (evaluating a VisualBasicValue expression). I need the result before I complete the parent workflow.
I add the expression to the metadata like this:
private VisualBasicValue<string> _expression;

protected override void CacheMetadata(NativeActivityMetadata metadata)
{
    base.CacheMetadata(metadata);
    var visualBasicValue = (VisualBasicValue<string>)(_childActivity.Text.Expression);
    var expressionText = visualBasicValue.ExpressionText;
    _expression = new VisualBasicValue<string>(expressionText);
    metadata.AddChild(_expression);
}
I tried scheduling the activity in the Execute method like this:
protected override void Execute(NativeActivityContext context)
{
    context.ScheduleActivity(_expression, OnCompleted);
    Result.Set(context, _value);
}
With a callback of:
private void OnCompleted(NativeActivityContext context, ActivityInstance completedInstance, string result)
{
    _value = result;
}
Unfortunately, the _expression activity is only executed after the parent's Execute method returns. Adding it as an implementation child doesn't work (it cannot work as an implementation child, since it is supposed to evaluate an expression that contains variables external to the parent).
Any ideas how to overcome this and execute it within the execution context?
In code, as in real life, you can't schedule something in the past (yet :).
ScheduleActivity() places the activity in an execution queue and executes it as soon as it can. As the parent activity is still running, _expression will only execute after it finishes. Bottom line: it's an asynchronous call.
If you want to control when _expression is evaluated, just use WorkflowInvoker to execute it synchronously, whenever you want.
public class MyNativeActivity : NativeActivity
{
    private readonly VisualBasicValue<string> _expression;

    public MyNativeActivity()
    {
        // 'expression' construction logic goes here
        _expression = new VisualBasicValue<string>("\"Hi!\"");
    }

    protected override void Execute(NativeActivityContext context)
    {
        var _value = WorkflowInvoker.Invoke(_expression);
        Console.WriteLine("Value returned by '_expression': " + _value);
        // use '_value' for something else...
    }
}
It took me a few days, but I managed to resolve my own issue (without breaking the normal way WF works).
What I ended up doing, in the CacheMetadata method, is iterating over the child's properties using reflection and building a LinkedList of evaluation expressions (using VisualBasicValue) for each of its arguments. In the execution phase I schedule the first evaluation; in its callback I iterate over the remaining evaluations, scheduling the next one and adding each result to a dictionary, until they are all done.
Finally, when there are no more evaluations to schedule, I schedule a final activity that takes the dictionary as its argument and can do whatever it wants with it. When it completes, it optionally returns the final result to the container's OutArgument.
What I previously failed to understand is that even though the scheduling occurs after the instantiating activity's Execute method, the callback runs before control is returned to the host workflow application, and in that window I could do my work.

How to use merge

In my Web tier I show a list of Tasks, and for each task it shows how many TaskErrors it has. The tasks are fetched in a controller class this way:
public List<Task> getTasks() {
    return taskEJB.findAll();
}
Task has a @OneToMany mapping with TaskError: a task can have multiple task errors.
When I delete a single TaskError, a controller in the Web tier calls TaskErrorFacade:remove(taskError). TaskErrorFacade is a stateless session bean. The code:
public void remove(TaskError taskError) {
    entityManager.remove(entityManager.merge(taskError));
    Task task = taskError.getTask();
    task.deleteTaskError(taskError);
    task = entityManager.merge(task);
}
Task:deleteTaskError(TaskError taskError) simply removes the task error from the List:
public void deleteTaskError(TaskError p_error) {
    taskErrorCollection.remove(p_error); // returns true
}
This works. The Web tier shows a table with all the tasks and the number of TaskErrors each one has. The number drops by 1 after deleting a TaskError.
But... the following two implementations of TaskErrorFacade:remove(taskError) don't work: the table in the Web tier doesn't subtract 1 from the total TaskError count (the rows are, however, deleted in the database):
public void remove(TaskError taskError) {
    entityManager.remove(entityManager.merge(taskError));
    Task task = taskError.getTask();
    task = entityManager.merge(task); // the merge is done before the deletion
    task.deleteTaskError(taskError);
}
And:
public void remove(TaskError taskError) {
    entityManager.remove(entityManager.merge(taskError));
    taskError.getTask().deleteTaskError(taskError); // no merge
}
Why are the two implementations above not working?
Edit: the mappings of the relations are also important. Task is mapped in TaskError this way:
@JoinColumn(name = "task_id", referencedColumnName = "id")
@ManyToOne(optional = false)
private Task task;
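The Task side of the relationship isn't shown; based on the taskErrorCollection field used in deleteTaskError above, it presumably looks something like this (an illustrative sketch, not taken from the original post):

@OneToMany(mappedBy = "task")
private Collection<TaskError> taskErrorCollection;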