Is there a way to stop an Azure virtual machine after two hours? - powershell

I have already written a PowerShell script that starts an Azure virtual machine via a POST request to Azure Automation. For cost reasons, those machines should automatically stop after two hours.
Is there a way or function to do this easily?

For this requirement, you need to calculate how long the VM has been running yourself. You can get the event time of the last start while the VM is running; it is in UTC. Then calculate the elapsed time up to now. Here is the Azure CLI command to get the status events:
az vm get-instance-view -g yourResourceGroup -n yourVM --query instanceView.statuses
Or you can filter the activity log for the time of the last "Start Virtual Machine" event. Below is the Azure CLI command:
az monitor activity-log list --resource-id yourVM_resourceId --query "[?operationName.localizedValue == 'Start Virtual Machine'].eventTimestamp" --max-events 1
You would then need a recurring task that repeats this check until the two hours have elapsed. In my opinion, that check belongs in the script that starts the VM.
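The recurring check could be sketched as follows. This is a minimal sketch, assuming GNU date; the az commands and resource names are placeholders taken from the commands above, and the actual Azure calls are left as comments.

```shell
#!/usr/bin/env bash
# In a real run, the start timestamp would come from the activity log, e.g.:
#   start_ts=$(az monitor activity-log list --resource-id yourVM_resourceId \
#     --query "[?operationName.localizedValue == 'Start Virtual Machine'].eventTimestamp" \
#     --max-events 1 -o tsv)

# Returns success (0) when the VM has been running for two hours or more.
should_stop() {
  local start_ts="$1"
  local start_epoch now_epoch
  start_epoch=$(date -u -d "$start_ts" +%s)
  now_epoch=$(date -u +%s)
  (( now_epoch - start_epoch >= 7200 ))   # 7200 seconds = two hours
}

# Demo with a start time three hours in the past:
demo_start=$(date -u -d '3 hours ago' '+%Y-%m-%dT%H:%M:%SZ')
if should_stop "$demo_start"; then
  echo "stopping VM"
  # az vm deallocate -g yourResourceGroup -n yourVM   # deallocate to stop billing
fi
```

Note that deallocating (rather than just stopping) is what actually stops compute billing.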


Azure DevOps Agent - Custom Setup/Teardown Operations

We have a cloud full of self-hosted Azure Agents running on custom AMIs. In some cases, I have some cleanup operations which I'd really like to do either before or after a job runs on the machine, but I don't want the developer waiting for the job to wait either at the beginning or the end of the job (which holds up other stages).
What I'd really like is to have the Azure Agent itself say "after this job finishes, I will run a set of custom scripts that will prepare for the next job, and I won't accept more work until that set of scripts is done".
In a pinch, maybe just a "cooldown" setting would work -- wait 30 seconds before accepting another job. (Then at least a job could trigger some background work before finishing.)
Has anyone had experience with this, or knows of a workable solution?
I suggest three solutions:
Create another pipeline to run the clean-up tasks on the agents. You can also add a demand for specific agents (see https://learn.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml) with Agent.Name -equals [Your Agent Name]. You can set the frequency to minutes or hours, as you like, using a cron pattern. While this pipeline is running and occupying an agent, the agent being cleaned will not be available for other jobs. Note that you can trigger this pipeline from another pipeline, but if both use the same agents, they can deadlock.
Create a template containing script tasks with all the clean-up logic and use it at the end of every job (which you have discounted).
Rather than using static VMs for agent hosting, use Azure scale sets for self-hosted agents: every time agents are scaled down they are gone, and when they are scaled up they start fresh. This also saves a lot of money, since agents are not sitting idle when no one is working. We use this option and moved away from static agents. We have also used Packer to rebuild the VM image/VHD overnight to update it with patches, required software, and cached Docker images.
ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops
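For the first suggestion, a scheduled cleanup pipeline pinned to a specific agent could look like this minimal YAML sketch; the pool, agent, and script names are placeholders:

```yaml
# Hypothetical azure-pipelines.yml for an hourly cleanup run
schedules:
- cron: "0 * * * *"           # every hour
  displayName: Hourly agent cleanup
  branches:
    include: [ main ]
  always: true                # run even if there are no code changes

pool:
  name: SelfHostedPool        # placeholder pool name
  demands:
  - Agent.Name -equals MyAgent01

steps:
- script: ./cleanup.sh        # placeholder cleanup script
  displayName: Run cleanup tasks
```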
For those discovering this question, there's a much better way: run your self-hosted agent with the --once flag, documented here.
You'll need to wrap it in your own bash script, but something like this works:
while :
do
echo "Performing pre-job setup..."
echo "Waiting for job..."
./run.sh --once
echo "Cleaning up..."
sleep 2
done
Another option would be to use a ScaleSet VM setup which preps a new agent for each run and discards the VM when the run is done. It can prepare new VMs in the background while the job is running.
And I suspect you could implement your own IMaintenanceProvider:
https://github.com/microsoft/azure-pipelines-agent/blob/master/src/Agent.Worker/Maintenance/MaintenanceJobExtension.cs#L53

Run Ansible playbook against Windows and then start another playbook

I am running an Ansible playbook that builds a VMware machine from an ISO using ADO pipelines, and it's exceeding the allowed run time (60 minutes), so I want to break it up into two different playbooks and ADO pipelines. My question is: how would I pass the randomly generated machine name to the second playbook/pipeline?
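One possible approach (a sketch only, with placeholder names, not a confirmed solution for this setup): publish the generated name with an Azure DevOps logging command in the first pipeline, then pass it to the second pipeline when queuing it.

```shell
#!/usr/bin/env bash
# Stand-in for the randomly generated machine name from the first playbook:
VM_NAME="vm-$RANDOM"

# Inside the first ADO pipeline, this logging command publishes the value
# as an output variable that later stages can read:
echo "##vso[task.setvariable variable=vmName;isOutput=true]$VM_NAME"

# The second pipeline could then be queued with the value, e.g. (hypothetical
# definition name):
#   az pipelines build queue --definition-name second-pipeline \
#     --variables "vmName=$VM_NAME"
```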

Azure Devops: Queue a build to run in the evening

We are trying to queue a build from code, but it should not run instantly; it should run in the evening, as our build pipeline is quite free then and this job does not need to run right away.
We queue around 20 of those builds on a daily basis, and right now they unfortunately block other builds. I know we can use build priorities, but that is not good enough, as the build we want to postpone takes quite a long time and would block other builds if it started before a high-importance build.
We also saw that it is possible to create a schedule, but that sounds like a recurring build, whereas we need the build to run only once.
There is a workaround to run a build once at an appointed time, using the Azure CLI and a scheduled CMD task. You can follow the steps below.
1. Install the Azure CLI. You can follow the steps in this blog to get started: https://devblogs.microsoft.com/devops/using-azure-devops-from-the-command-line/
2. Create a CMD script like the one below and save it to your local disk. For more information about az pipelines commands, see https://learn.microsoft.com/en-us/cli/azure/ext/azure-devops/pipelines/build?view=azure-cli-latest#ext-azure-devops-az-pipelines-build-queue
az pipelines build queue --definition-name your-build-definition-name -o table
3. Create a scheduled task for that script using schtasks.exe, as in the example below. For more information, see https://www.windowscentral.com/how-create-task-using-task-scheduler-command-prompt
schtasks /create /tn "give-your-task-a-name" /tr "the-location-of-the-scripts-file-you-created-in-previous-step" /sc ONCE /st specify-the-time-to-run-your-build
You can save this script to your local disk too. The next time you want your build to run in the evening, just run the script to schedule it.
Hope the steps above help. This workaround is a bit tedious and needs a little effort, but it is a once-and-for-all setup.
Triggering a build only once is not available for now. As you saw, the schedule settings only offer working days, time, and time zone.
There is a user voice suggestion, Scheduled builds - More flexible timing configuration, which asks for more flexible timing configuration. You can vote for and follow up on it.
As mentioned in the comments on that thread, we can use cron syntax to specify schedules in a YAML file. In testing, this gives more detailed timing configuration, but we still cannot schedule the build to run only once.
As a workaround, we could schedule the build on a certain day of the week and, after the scheduled build completes, disable the schedule manually or with the Azure DevOps CLI.
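The cron syntax in YAML mentioned above could pin the build to the evening, for example (a sketch; times are in UTC and the branch name is a placeholder):

```yaml
schedules:
- cron: "0 20 * * 1-5"        # 20:00 UTC on weekdays
  displayName: Evening build
  branches:
    include: [ main ]
  always: true                # run even without new changes
```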
Hope this helps.

Informatica Workflow Scheduling with Autosys

I am trying to understand more about Informatica workflow scheduling with Autosys.
Assume I have an Informatica workflow wf_test and a UNIX script, say test.sh, with a pmcmd command to run this workflow. I also wrote a JIL (test.jil) for Autosys to schedule test.sh daily at 10:00 PM.
How exactly does Autosys kick off workflow wf_test at the specified schedule?
Can anyone shed some light on the communication between Autosys and Informatica?
Do we need to have Informatica and Autosys installed on the same server?
Is there any agent or service needed between Autosys and Informatica to make this possible?
Additionally, can we give the Informatica details to Autosys directly, without any script?
Many Thanks
aks
How exactly does Autosys kick off workflow wf_test at the specified schedule?
Autosys is a scheduling tool. The Autosys engine keeps checking, every 5 seconds, whether any job is scheduled to run, based on the JIL. When the time comes and the conditions are satisfied, it runs the given command on the given host. That could be a pmcmd command or any shell script.
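A minimal JIL of the kind referenced here might look like the following sketch; the machine, owner, and path values are placeholders:

```
/* test.jil - hypothetical Autosys job definition */
insert_job: wf_test_job
job_type: CMD
command: /scripts/test.sh
machine: infa_host01
owner: infauser@infa_host01
start_times: "22:00"
days_of_week: all
description: "Trigger Informatica workflow wf_test via pmcmd"
```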
Can anyone shed some light about the communication between Autosys and Informatica?
The communication is between the Autosys server and the server where Informatica is installed. Read this article. Additionally, check with your Autosys engineering team on the steps to implement this in your project/environment.
Do we need to have Informatica and Autosys installed on the same server?
Definitely not. They should be separate, but connectivity must be established.
Is there any agent or service needed between Autosys and Informatica to make this possible?
Yes. Read the article mentioned in point 2.
Additionally, can we give the Informatica details to Autosys directly, without any script?
Yes. You can specify the whole pmcmd command.
As Autosys is a scheduling tool, it triggers the command at the time specified in the job's JIL. The important part here is that we also specify the machine name on which we want that particular command to execute.
So to answer your question: Autosys and Informatica can be on different servers, provided the Autosys agent is configured on the Informatica server and the Informatica machine/server details are configured in Autosys (it's like creating a machine in Autosys, similar to creating a global variable or a job).
We run our workflows through shell scripts using the pmcmd command, and Autosys and Informatica are on different servers. There might be a way to call workflows directly from Autosys, but that becomes complicated at large scale when calling thousands of workflows; having a generic pmcmd script that multiple workflows can use is the easier option.
All Autosys does in this case is run a command at a specified time. It is completely unaware of Informatica. It doesn't need to be on the same server, as there simply is no communication between them.
All it needs is access to the test.sh script, wherever it is. And this, in turn, needs to be able to run the pmcmd utility. So in the most basic setup, the Informatica client with pmcmd could be on the same server as Autosys; the Informatica server just needs to be reachable by pmcmd.
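A test.sh wrapper of the kind described in these answers might look like the following sketch. The service, domain, and folder names are placeholders, and the actual pmcmd call is commented out since it requires the Informatica client to be installed.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper that Autosys would invoke to start wf_test.
INFA_SERVICE="IS_PROD"      # Integration Service name (placeholder)
INFA_DOMAIN="Domain_PROD"   # Informatica domain (placeholder)
INFA_FOLDER="MY_FOLDER"     # repository folder holding the workflow (placeholder)
WORKFLOW="wf_test"

# -uv/-pv tell pmcmd to read the user and password from environment
# variables, so credentials are not hard-coded in the script.
PMCMD_ARGS=(startworkflow -sv "$INFA_SERVICE" -d "$INFA_DOMAIN"
            -uv INFA_USER -pv INFA_PASSWORD
            -f "$INFA_FOLDER" -wait "$WORKFLOW")

echo "Would run: pmcmd ${PMCMD_ARGS[*]}"
# pmcmd "${PMCMD_ARGS[@]}"   # uncomment where the Informatica client is installed
```

The -wait flag makes pmcmd block until the workflow finishes, so the Autosys job's exit status reflects the workflow result.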
I would suggest scheduling the jobs using the built-in Scheduler Service, available from version 10.x. You then don't even have to write a pmcmd command to trigger the workflow.

Scheduling PowerShell script in Azure

What is the best way to schedule a PowerShell script in Azure? Should I create a VM and schedule it via Task Scheduler, or is there a better way?
I have a PowerShell script that extracts data from the audit log and reports some information. Thank you.
You should use Azure Automation. It's easy to use, and you can run jobs for 500 minutes per month for free.