How to set a different Rundeck user in a scheduled job?

I'm wondering if it's possible to set a different Rundeck user to run a scheduled job. I created a user that can only perform one action: run jobs. There is an existing job already scheduled to run every minute, but I can't find any option to change the user who runs it. Thanks in advance to those who answer.

Currently, the only way to change the user a scheduled job runs as is to save the same job as another user. But that sounds like a good enhancement request.

Related

Admin URL for currently running Celery task

I want to show my admin user a hyperlink to a running celery task.
I use django_celery_results, but this only contains rows for completed tasks.
I could write a simple view, but I assume a more polished solution already exists.
Steps:
Admin user selects several rows
My custom admin action creates an async Celery task.
I want to show the admin user a hyperlink where they can wait for the result.
As soon as the result is available a redirect to the django_celery_results admin view should happen.
Which celery add-on could help?
I don't think you need any add-on to accomplish this. You need to subscribe your service so it gets notified of specific events (task-succeeded, task-received, and task-failed come to mind first). Look at the Monitoring section of the Celery documentation, and at Real-time processing in particular, as it gives some example code to start with. This is how Flower works, for example.
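For instance, a minimal sketch along the lines of the real-time processing example in the Celery docs (the broker URL is an assumption):

```python
from celery import Celery

# assumption: your broker URL
app = Celery(broker="amqp://guest@localhost//")

def announce(event):
    # task events carry the task uuid; record it as finished here so your
    # view can redirect to the django_celery_results admin page
    print(f"{event['type']}: task {event['uuid']}")

def main():
    with app.connection() as connection:
        recv = app.events.Receiver(connection, handlers={
            "task-received": announce,
            "task-succeeded": announce,
            "task-failed": announce,
        })
        recv.capture(limit=None, timeout=None, wakeup=True)

if __name__ == "__main__":
    main()
```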

Way to persist process spawned by a task on an agent?

I'm developing an Azure Devops extension with tasks in it. In one of the tasks, I'm starting a process and I'm doing configurations. In another task, I'm accessing the same process API to consume it. This is working perfectly fine, but I notice that after the job is done, my process is killed. I was planning to allow the user to do the configuration on an agent and be able to access it in another job or pipeline.
Is there a way to persist a process on an agent? My feeling is that the agent kills every child process created during cleanup. Where can I find documentation on this?
Edit: I managed to find this thread that talks about a certain Process.clean variable but there's not any more information about it and I didn't find documentation on it.
Your feeling is correct. Agents clean up spawned processes when the job finishes, and that's by design. A single machine can have multiple agents on it, and multiple agents can be running tasks in parallel. What if you have one machine with 10 agents on it, and they all start up this process at once?
IMO, the approach you're taking is suspect. If you need to persist information across jobs, there are numerous ways to do so (for example, an output variable containing JSON) that don't involve spawning a service that continues running outside the scope of the job that started it.
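As a hedged illustration of the output-variable route: a script step can emit the configuration as JSON through the task.setvariable logging command. The step can be written in any language (Python here), and the variable name and settings are made up:

```python
import json

# hypothetical configuration gathered by the first task
config = {"endpoint": "http://localhost:8080", "token": "example"}

# Azure DevOps logging command: creates an output variable named 'config'
# that a later job can read via dependencies.<job>.outputs['<stepName>.config']
print(f"##vso[task.setvariable variable=config;isOutput=true]{json.dumps(config)}")
```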

Configure email alert for any job/workflow failures in Hue/Oozie?

I would like to get email notifications if any job/workflow failed in Oozie. I am using Hue to monitor the workflows.
I don't want to add an email action to each and every workflow because I have around 60 workflows already running.
I am also aware of the sub-workflow approach, but even then I would have to edit all 60 of my workflows and restart the coordinator to reflect the change.
Is it possible in Oozie or Hue to get notification for any job failures without modifying the workflow? Can we configure something at Oozie/Hue level to get email notifications?
There is no out-of-the-box option; connecting Oozie SLA alerts to your monitoring system is often used for this, but that would also require updating the workflows.
In the future, an option could be added to Hue to automatically attach an email action on failure to any workflow, but this would need to be developed.
Without touching your workflows, you would need to scrape the Oozie jobs API, but that is also somewhat reinventing the wheel.
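If you do go the scraping route, a rough sketch of polling the jobs REST API and mailing an alert might look like this. The Oozie host, addresses, and result window are assumptions, and a real version would have to remember which job ids it has already alerted on (for example by filtering on endTime):

```python
import smtplib
from email.message import EmailMessage
import requests

OOZIE = "http://oozie-host:11000/oozie"  # assumption: your Oozie base URL

# list recent workflows in FAILED or KILLED state
resp = requests.get(
    f"{OOZIE}/v2/jobs",
    params={"jobtype": "wf", "filter": "status=FAILED;status=KILLED", "len": 50},
)
resp.raise_for_status()
failed = resp.json().get("workflows", [])

if failed:
    body = "\n".join(f"{w['id']} {w['appName']} {w['status']}" for w in failed)
    msg = EmailMessage()
    msg["Subject"] = f"{len(failed)} Oozie workflow(s) failed"
    msg["From"] = "oozie-alerts@example.com"  # assumption
    msg["To"] = "ops@example.com"             # assumption
    msg.set_content(body)
    smtplib.SMTP("localhost").send_message(msg)
```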

How to use Cron to run commands when jobs in a particular folder complete?

I am running a long simulation on our cluster. I submit dozens of jobs at first; each job is held until its predecessor completes, so that the simulation can be extended to cover the period I want.
Due to the limit on the total number of jobs we can submit, I have to submit new jobs every day, once the previous ones have completed.
Doing this every day is time-consuming, so I wonder if cron could monitor whether all the jobs launched in a particular folder have completed on the cluster, and if so, execute the commands written in a job.sh file to submit more jobs within that particular folder.
I am also happy to use methods other than cron.
Thank you.
You may also be interested in trying BeyondCron, which is currently available for early adopters. Using conditions, you should be able to solve your problem.
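If you want to stay with plain cron, one rough sketch is a script that cron runs every few minutes, checks whether you still have jobs queued or running, and submits the next batch if not. The scheduler command and paths below are assumptions and depend on your cluster:

```python
#!/usr/bin/env python3
# run from cron, e.g.: */15 * * * * /path/to/check_and_submit.py
import getpass
import subprocess

user = getpass.getuser()

# assumption: a PBS/SGE-style scheduler where `qstat -u $USER` lists your
# jobs; on Slurm, use `squeue -u $USER` instead
out = subprocess.run(["qstat", "-u", user], capture_output=True, text=True).stdout

# crude check: no line mentioning the user means nothing queued or running
if user not in out:
    # submit the next batch; the folder path is an assumption
    subprocess.run(["bash", "job.sh"], cwd="/path/to/simulation/folder", check=True)
```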

Using PowerShell to create automated systems

I'm looking to develop an automated notification and log-off system that notifies users and logs accounts off a computer. So far I have planned around the example of a scheduled class: all accounts except those registered for the class would be affected. The system might notify the logged-in users a certain period of time before the class starts and log them off just before class time. Or it could limit their access, for example to the printer, once the class has started.
So my question is: can I use PowerShell to develop this project? How useful would it be, or should I think about using Python instead?
Thanks Fellas!
I'm not sure PowerShell brings anything special to the party. What you're describing would require a PowerShell session running in the background, perhaps tied into some sort of eventing, such as a timer class. It might be just as easy to automate this with the Task Scheduler: at the appointed time, check the logged-on users and, if they don't meet the requirements, log them off. You could use PowerShell to create the tasks and handle the processing, or any other language really.
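For illustration only, here is a rough sketch of that check-and-log-off step, in Python since the choice of language is open. The allowed-accounts list is hypothetical, and the quser output parsing is simplified:

```python
import subprocess

# hypothetical accounts registered for the scheduled class
ALLOWED = {"instructor", "classuser01"}

# `quser` prints sessions as: USERNAME SESSIONNAME ID STATE IDLE LOGON TIME;
# the current session is prefixed with '>'. This parsing is simplified and
# assumes every session line includes a session name.
out = subprocess.run(["quser"], capture_output=True, text=True).stdout
for line in out.splitlines()[1:]:
    parts = line.replace(">", " ").split()
    if len(parts) >= 3 and parts[0].lower() not in ALLOWED:
        subprocess.run(["logoff", parts[2]])  # end that session by id
```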