I am being driven mad by some PeopleSoft jobs that I'm scheduling. Job Sets and PSJobs containing Crystal processes will not initiate the Crystal steps. The process (or any other Crystal process I try) runs fine when scheduled independently, but in any form of PSJob or Job Set it sits at the first Crystal job with status = Queued.
I've spent a day googling, reading Metalink, etc. Anyone got any ideas?
Cheers
Karl
I have seen this happen when the job is run on a Unix process scheduler; as you probably know, Crystal will only run on a Windows process scheduler. The solution was to force the entire job to run on a Windows process scheduler.
I have a scheduled parallel Datastage (11.7) job.
This job has a Hive Connector with a Before and After Statement.
The Before statement runs OK, but the After statement remains in a running state for several hours (in the Hue log I see the job finished in one hour) and I have to abort it manually in DataStage Director.
Is there a way to "program an abort"?
For example, I want to schedule the interruption of the running job every morning at 6.
I hope I was clear :)
Even though you can kill the job, as per other responses, using dsjob to stop it, this may have no effect: the After statement has been issued synchronously, so the job is waiting for it to finish and is (probably) not processing kill signals and the like in the meantime. You would be better advised to work out why the After command is taking too long, and address that.
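If you still need a scheduled kill as a stopgap, one option is a cron entry on the engine host that issues dsjob -stop at 06:00; note that, per the above, the stop request may not take effect while the After statement is blocking. A minimal sketch, with hypothetical install path, project, and job names:

# crontab entry: try to stop the hung job at 06:00 every day (names are placeholders)
0 6 * * * . /opt/IBM/InformationServer/Server/DSEngine/dsenv && /opt/IBM/InformationServer/Server/DSEngine/bin/dsjob -stop MyProject MyParallelJob >> /tmp/dsjob_stop.log 2>&1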
I developed a job in Talend, built it, and automated it to run via the Windows batch file from the build below.
On execution, the Start Windows batch file invokes the dimtableinsert job and then, after it finishes, invokes fact_dim_combine. It takes just minutes to run in Talend Open Studio, but when I invoke the batch file via the Task Scheduler it takes hours to finish.
Time Taken
Manual -- 5 Minutes
Automation -- 4 hours (on invoking Windows batch file)
Can someone please tell me what is wrong with this automation process?
The delay in execution could be a latency issue. Talend might be installed on the same server as the database instance, so whenever you execute the job from Talend it completes as expected, while the scheduler might be installed on another server, and when you call the job through the scheduler it takes longer to insert the data.
Make sure your scheduler and database instance are on the same server.
Execute the job directly in the Windows terminal and check whether you have the same issue.
The easiest way to know what is taking so much time is to add some logs to your job.
First, add some tWarn components at the start and end of each of the subjobs (dimtableinsert and fact_dim_combine) to find out which one takes longest.
Then add more logs before/after the components inside the jobs.
This way you should get a better idea of what is responsible for the slowdown (DB access, the writing of some files, etc.).
Informatica Workflow Scheduling with Autosys.
I am trying to understand more about the Informatica Workflow Scheduling with Autosys.
Assume I have an Informatica workflow wf_test and a UNIX script, say test.sh, with a pmcmd command to run this workflow. I have also written a JIL file (test.jil) for Autosys to schedule test.sh daily at 10:00 PM.
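For concreteness, the two pieces might look something like this (a sketch only; every service, domain, folder, path, and host name here is hypothetical):

#!/bin/sh
# test.sh - starts the workflow and waits for it, so the script's exit status
# reflects the workflow result (INFA_USER/INFA_PASSWD are environment variables)
pmcmd startworkflow -sv IS_PROD -d Domain_PROD -uv INFA_USER -pv INFA_PASSWD -f MY_FOLDER -wait wf_test

/* test.jil - Autosys command job running test.sh daily at 10:00 PM */
insert_job: wf_test_job
job_type: c          /* c = command job */
command: /home/infa/scripts/test.sh
machine: infa_host   /* the host where the command should run */
owner: infauser@infa_host
date_conditions: 1
days_of_week: all
start_times: "22:00"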
How exactly does Autosys kick off the workflow wf_test at the specified schedule?
Can anyone shed some light on the communication between Autosys and Informatica?
Do we need to have both the Informatica and Autosys servers installed on the same machine?
Is there any agent or service that needs to sit between Autosys and Informatica to make this possible?
Additionally, can we give the Informatica details to Autosys directly, without any script?
Many Thanks
aks
How exactly does Autosys kick off the workflow wf_test at the specified schedule?
Autosys is a scheduling tool. The Autosys event processor keeps checking, every 5 seconds, whether any job is scheduled to run, based on the JIL. When the time comes and the conditions are satisfied, it runs the given command on the given host. That could be a pmcmd command or any shell script.
Can anyone shed some light on the communication between Autosys and Informatica?
The communication is between the Autosys server and the server where Informatica is installed. Read this article. Additionally, check with your Autosys engineering team on the steps to implement the same in your project/environment.
Do we need to have both the Informatica and Autosys servers installed on the same machine?
Definitely not. They should be separate, but connectivity between them has to be established.
Is there any agent or service that needs to sit between Autosys and Informatica to make this possible?
Yes. Read the article given in point 2.
Additionally, can we give the Informatica details to Autosys directly, without any script?
Yes. You can mention the whole pmcmd command.
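For example, the command attribute of the JIL could carry the whole thing (hypothetical names again; credentials would come from environment variables on the agent host):

/* no wrapper script: pmcmd invoked straight from the JIL */
command: pmcmd startworkflow -sv IS_PROD -d Domain_PROD -uv INFA_USER -pv INFA_PASSWD -f MY_FOLDER -wait wf_test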
Since Autosys is a scheduling tool, it will trigger the command at the time specified in the job's JIL. The important part here is that we also mention the machine name on which we want that particular command to be executed.
So, to answer your question: Autosys and Informatica can be on different servers, provided the Autosys agent is configured on the Informatica server and the Informatica machine/server details are configured in Autosys (it's like creating a machine in Autosys, similar to creating a global variable or a job).
We run our workflows through shell scripts using the pmcmd command, and our Autosys and Informatica are on different servers. There might be a way to call workflows directly from Autosys, but that gets complicated when you are working at large scale, calling thousands of workflows. Having a generic script to call pmcmd, reusable across multiple workflows, is the easier option; a sketch of one follows.
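Such a generic wrapper could be as small as this sketch (hypothetical service and domain names; the folder and workflow arrive as arguments, so one script serves every Autosys job):

#!/bin/sh
# run_workflow.sh FOLDER WORKFLOW - shared pmcmd wrapper
FOLDER="$1"
WORKFLOW="$2"
pmcmd startworkflow -sv IS_PROD -d Domain_PROD -uv INFA_USER -pv INFA_PASSWD -f "$FOLDER" -wait "$WORKFLOW"
# pmcmd returns non-zero on failure, so Autosys marks the job FAILURE
exit $?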
All Autosys does here is "run a command at a specified time". It's completely unaware of Informatica. It doesn't need to be on the same server, as there simply is no communication between them.
All it needs is access to the test.sh script, wherever that is. And this, in turn, needs to be able to run the pmcmd utility. So in the most basic setup, the Informatica >client< with pmcmd could be on the same server as Autosys; the Informatica server just needs to be reachable by pmcmd.
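A quick way to verify that setup (hypothetical service and domain names): run a ping from the Autosys host; if it succeeds, the local pmcmd client can reach the remote Integration Service.

# run on the Autosys host; success means the client-to-server path works
pmcmd pingservice -sv IS_PROD -d Domain_PROD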
I would suggest scheduling the jobs using the built-in scheduler service, available from version 10.x. You don't even have to write a pmcmd command to trigger the workflow.
A job has been submitted and there is an entry for it in dba_jobs, but the job never moves to the running state, so there is no entry for it in dba_jobs_running. Yet the parameter 'JOB_QUEUE_PROCESS' has the value 10, and there are no jobs in the running state. Please suggest how to solve this problem.
SELECT NEXT_DATE, NEXT_SEC, BROKEN, FAILURES, WHAT
FROM DBA_JOBS
WHERE JOB = :JOB_ID
What does that return? A BROKEN job won't kick off, and if the NEXT_DATE/NEXT_SEC is in the future, it won't kick off until then either.
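If it does come back BROKEN, something along these lines (hypothetical job id and connect string, run as the job owner) clears the flag and forces an immediate run; DBMS_JOB.RUN executes the job in your own session, which also surfaces any error it raises:

sqlplus -s job_owner/password@mydb <<'EOF'
BEGIN
   DBMS_JOB.BROKEN(42, FALSE);  -- clear the broken flag; next_date defaults to SYSDATE
   DBMS_JOB.RUN(42);            -- run job 42 now, in this session
   COMMIT;
END;
/
EOF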
I hope you labeled that database parameter correctly i.e. 'JOB_QUEUE_PROCESSES=10'.
This is typically why a job won't run.
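A quick check of the spelling and value (the parameter is dynamic, so it can be changed without a restart):

sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER job_queue_processes
-- if the value is 0, no Jnnn processes start and DBMS_JOB jobs stay queued
ALTER SYSTEM SET job_queue_processes = 10;
EOF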
Also check that the user/schema running the job is correct.
An alternative is to use a different scheduling tool to run the job (e.g. cron on Linux).
I am writing an application to allow users to schedule one-time long-running tasks from a web application (Linux/Apache/CGI::Application). To do this I use the Schedule::At module which is the Perl interface to the "at" command. Since the scheduled tasks are not repeating, I am not considering "cron". I have two issues with "at" though:
Scheduling works fine when my CGI application runs under the suexec wrapper, but not when scheduled by the owner of the Apache process. How can I get scheduling to work in both environments (suexec and no-suexec)?
It appears that the processes scheduled by "at" or Schedule::At have no failure reporting, and I sometimes find that scheduled tasks fail silently. Is there some way to log the fact that the scheduled task (not the scheduler itself) has failed to run?
I am not fixed on "at" and am open to using other, more robust, scheduling methods if there are any.
Thank you for your attention.
I've heard good things about The Schwartz. It doesn't have a delay-until feature, though; you'd still submit the jobs via at, but it should solve both of the problems you list above, as long as your submit_job script is simple.
(As a caveat, I've only used Gearman. I think you'd want a reliable job queue for this, a "fire and forget" mechanism, so you can keep your submit_job script dumb.)
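On the silent-failure point specifically: at(1) only mails whatever output the job produces (and only if local mail delivery works), so a simple submit script can wrap the real command and record its exit status. A minimal sketch, with hypothetical log path and script name:

#!/bin/sh
# submit_job.sh TIME COMMAND... - schedule COMMAND via at(1) and log its exit status
TIME="$1"; shift
echo "$* >>/var/log/at-tasks.log 2>&1; echo \"\$(date): '$*' exited \$?\" >>/var/log/at-tasks.log" | at "$TIME"

Used as, e.g., ./submit_job.sh '02:00 tomorrow' /home/app/bin/long_task.pl, this keeps the CGI side dumb while still leaving a trail when the task fails to run cleanly.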