I need help from a PowerShell expert to achieve a requirement I have.
We are using a PowerShell automation script to upgrade Power BI gateways, and we have around 160 VMs. The process is performed sequentially, meaning the next machine cannot be upgraded until the previous gateway upgrade is completed.
I am looking to enhance the script so that all the gateway machines are upgraded in parallel.
I did some high-level research and came across the background job cmdlets, which run a task/job in the background. Currently, I have 2 VMs that can be used for testing to see if we can achieve the requirement.
I would appreciate any lead on this; I only have a high-level knowledge of PowerShell.
Thanks
I was trying to test by upgrading two gateways to the March version.
The first gateway upgrade should execute, and the other should be added as a background job.
In PowerShell we have background job cmdlets like Start-Job and Receive-Job that add jobs in the background and display the status of the jobs added.
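From my research, I believe the rough shape is something like the sketch below. The VM names and the Upgrade-Gateway.ps1 script are placeholders for our actual upgrade logic, so treat this as a starting point rather than a working solution:

$vms = 'GATEWAY-VM1', 'GATEWAY-VM2'   # the two test machines
$jobs = foreach ($vm in $vms) {
    # -AsJob runs the script remotely and immediately returns a background job
    Invoke-Command -ComputerName $vm -FilePath .\Upgrade-Gateway.ps1 -AsJob
}
$jobs | Wait-Job | Receive-Job   # wait for all upgrades to finish, then show their output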
Can any PowerShell expert help me tackle this requirement?
Thanks
I have a group of interdependent .ps1 scripts I want to run in Azure (trying to set up continuous deployment with git, Pester unit tests, etc., as outlined in this blog). How can I run these scripts in Azure without needing to manage a server on which they run? E.g., can I put them in a storage account and execute them there, or do something similar?
Using an Azure Automation account/runbook seems to be limited to a single script per runbook (granted, you can use modules, but that is insufficient in my case).
Note that I need PowerShell version 5+ (I noticed Azure Web Apps and Functions only have 4.x).
Thanks in advance!
You were on the right track with Azure Functions. However, given that you need v5+ of PowerShell, you may want to look at Azure Container Instances (ACI) instead. It's a slightly different approach (via containers), but it should not impose any such limitations and will free you from having to manage a virtual machine.
Note: At this time ACI is in preview. Documentation is available here.
There is a PowerShell container image available on Docker Hub that you could start with. To execute multiple scripts in the container, you can override CMD in the Dockerfile.
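For instance, you could test the idea from a PowerShell prompt by overriding the image's default command at run time. The image tag and script names below are placeholders; older tags of the image use powershell rather than pwsh as the binary name, and you may need to adjust the host volume path for your OS:

# Mount a folder of scripts into the container and run them in order
docker run --rm -v "${PWD}/scripts:/scripts" microsoft/powershell `
    pwsh -Command "& /scripts/deploy.ps1; & /scripts/test.ps1"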
I am able to create an Azure VM using PowerShell.
I have to create 4 VMs in parallel.
Is there any feature in PowerShell to create multiple VMs in parallel? Something like background jobs, or calling the same function for each VM on separate threads?
Have you considered VM Scale Sets? They automatically deploy VMs in parallel in a highly available configuration and make managing those VMs much easier (overview doc here: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-overview). You can of course deploy a scale set or a bunch of VMs from PowerShell (doc for deploying a scale set via PowerShell here: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-create-vmss), but the PowerShell cmdlets require you to specify lots of related properties (e.g. virtual network, subnet, load balancer configs, etc.). The Azure CLI 2.0 (which you can use on both Windows and Linux!) provides lots of good defaults. For instance, in Azure CLI 2.0 you can create all of your VMs in parallel with this single command:
az vmss create --resource-group vmss-test-1 --name MyScaleSet --image UbuntuLTS --authentication-type password --admin-username azureuser --admin-password P#ssw0rd! --instance-count 4
(taken from the documentation here: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-create#create-from-azure-cli)
Hope this helps! :)
No, there are no built-in Azure PowerShell cmdlets or features enabling you to do this, but you can create your own routine for it. I'm using PS jobs.
You need to use Save-AzureRmContext and Import-AzureRmContext to authenticate PowerShell inside the jobs, or use some form of automated login.
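A minimal sketch of that pattern, assuming a recent AzureRM module with the simplified New-AzureRmVM parameter set (the resource group, location, credential, and VM names are placeholders):

Save-AzureRmContext -Path "$env:TEMP\azctx.json" -Force   # persist the authenticated context once
$cred = Get-Credential                                    # local admin credential for the new VMs
$jobs = foreach ($name in 'vm1', 'vm2', 'vm3', 'vm4') {
    Start-Job -ArgumentList $name, $cred -ScriptBlock {
        param($VmName, $Credential)
        Import-AzureRmContext -Path "$env:TEMP\azctx.json"   # re-authenticate inside the job
        New-AzureRmVM -ResourceGroupName 'my-rg' -Location 'westeurope' -Name $VmName -Credential $Credential
    }
}
$jobs | Wait-Job | Receive-Job   # wait for all four creations and collect the results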
Thanks all, I have solved my issue using the PS workflow parallel and sequence features.
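For anyone who lands here later, the workflow pattern looks roughly like this; the names and the InlineScript body are placeholders for the actual VM-creation calls:

workflow New-VmsInParallel {
    param([string[]]$Names)
    foreach -parallel ($n in $Names) {
        # InlineScript runs ordinary PowerShell; $using: passes workflow variables in
        InlineScript { Write-Output "Creating VM $using:n ..." }
    }
}
New-VmsInParallel -Names 'vm1', 'vm2', 'vm3', 'vm4'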
Informatica Workflow Scheduling with Autosys.
I am trying to understand more about Informatica workflow scheduling with Autosys.
Assume I have an Informatica workflow wf_test and a UNIX script, say test.sh, with a pmcmd command to run this workflow. I also wrote a JIL file (test.jil) for Autosys to schedule test.sh daily at 10:00 PM.
How exactly does Autosys kick off the workflow wf_test at the specified schedule?
Can anyone shed some light on the communication between Autosys and Informatica?
Do we need to have Informatica and the Autosys server installed on the same machine?
Is there any agent or service that needs to be present between Autosys and Informatica to make this possible?
Additionally, can we give the Informatica details directly to Autosys without any script?
Many Thanks
aks
How exactly does Autosys kick off the workflow wf_test at the specified schedule?
Autosys is a scheduling tool. The Autosys scheduler keeps checking, every 5 seconds, whether any job is scheduled to run, based on the JIL. When the time comes and the conditions are satisfied, it runs the given command on the given host. That command could be a pmcmd command or any shell script.
Can anyone shed some light on the communication between Autosys and Informatica?
The communication is between the Autosys server and the server where Informatica is installed. Read this article. Additionally, check with your Autosys engineering team on the steps to implement this in your project/environment.
Do we need to have Informatica and the Autosys server installed on the same machine?
Definitely not. They should be separate, but connectivity between them must be established.
Is there any agent or service that needs to be present between Autosys and Informatica to make this possible?
Yes. Read the article mentioned in point 2.
Additionally, can we give the Informatica details directly to Autosys without any script?
Yes. You can specify the whole pmcmd command.
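For example, a JIL roughly like the following should work; the job, machine, service, domain, folder, and credential names are all placeholders, and in practice you would avoid a plain-text password (for instance by wrapping pmcmd in a script or using Informatica's pmpasswd encryption):

insert_job: wf_test_job
job_type: c
machine: infa_host
owner: infauser
command: pmcmd startworkflow -sv Int_Svc -d Dom_Name -u Administrator -p password -f MyFolder -wait wf_test
date_conditions: 1
days_of_week: all
start_times: "22:00"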
Since Autosys is a scheduling tool, it will trigger the command at the time specified in the job's JIL. The important part here is that we also specify the machine name on which we want to execute that particular command.
So to answer your question: Autosys and Informatica can be on different servers, provided the Autosys agent is configured on the Informatica server and the Informatica machine/server details are configured in Autosys (it's like creating a machine in Autosys, similar to creating a global variable or a job).
We run our workflows through shell scripts using the pmcmd command, and our Autosys and Informatica are on different servers. There may be a way to call workflows directly from Autosys, but that becomes complicated at large scale when you're calling thousands of workflows; instead, having a generic script that calls pmcmd and can be reused by multiple workflows seems the easier option.
All Autosys does in this case is "run a command at a specified time". It's completely unaware of Informatica. It doesn't need to be on the same server, as there simply is no communication between them.
All it needs is access to the test.sh script, wherever it is. The script, in turn, needs to be able to run the pmcmd utility. So in the most basic setup, the Informatica client with pmcmd could be on the same server as Autosys; the Informatica server just needs to be reachable by pmcmd.
I would suggest scheduling the jobs using the built-in scheduler service, available from version 10.x. You don't even have to write a pmcmd command to trigger the workflow.
I'm testing a workflow runbook that utilizes Add-Type to add some custom C# code.
All of a sudden I started getting 'type already exists' errors on subsequent test jobs, as if a new PSSession is not being created.
In other words, it looks like new jobs are sharing the same execution context. I only get this locally if I try to run the same command twice per PS instance.
The type in question is a static class with some extension methods. Since it also happens to be the first type declared in the source block, I don't doubt that other, non-static types would throw errors as well.
I've executed this a handful of times already, so I fully expect that 'eventually' this will stop happening, but I can't seem to force it, and I have no idea what I could have done to trip it into this situation, either.
Seeing evidence of shared execution contexts across jobs like this, even (especially?) if only temporary, makes me wonder whether some or all of the general execution inconsistencies we've seen in the past, when making and deploying changes and performing subsequent tests soon after, are related to this.
I'm tempted to think that this is simply part of the difference between a test job and a 'real' one, but that raises questions about the validity of the test jobs themselves with respect to mimicking published jobs.
Are all Azure Automation jobs supposed to execute in isolation? Can this be controlled/exploited by a developer?
Each Automation account has its own isolated sandboxes where its jobs run, and those sandboxes are distributed among a number of worker machines. For test jobs, to improve job start time (since iterating on [make code change, retest] over and over is very common), Automation reuses the sandbox used by previous test jobs of the same runbook, if that sandbox has not been cleaned up yet, so that a new sandbox does not have to be spun up for every test job (sandbox creation is one reason job start times are longer than desired). Due to this behavior, if you execute test jobs of the same runbook within a short amount of time, you will see the behavior you describe above.
However, even production jobs of the same Automation account (across runbooks) can share sandboxes. We distribute jobs randomly across our worker machines, so it's possible that job A is queued for execution and placed on worker W, and that 5 minutes later job B is queued for execution and placed on worker W as well. If job A and job B belong to the same Automation account and have the same "demands" in terms of modules / module versions, they will be placed in the same sandbox if job A's sandbox is still around. "Module / module version demands" does not mean the modules used by the runbook, but the modules / latest module versions that existed in the Automation account at the time the job was started, the runbook was scheduled (for jobs started via schedule), or the runbook was assigned to a webhook (for jobs started via webhook).
In terms of resolving your specific problem, you could surround Add-Type with a try/catch statement, or perhaps use Add-Type -IgnoreWarnings.
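For example, a minimal guard along these lines (the type name and the $Source variable are placeholders for your own code):

# Compile the C# only if the type is not already loaded in this sandbox
if (-not ('MyNamespace.MyExtensions' -as [type])) {
    Add-Type -TypeDefinition $Source -IgnoreWarnings
}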
When I try to start a process using Start-Process in Azure Automation, it doesn't run and remains idle. Is it possible to run processes in Azure Automation?
There are two ways to run Azure Automation jobs: on Azure workers, and on Hybrid workers on your premises. I suppose you are trying to run a job on a Hybrid worker, so you should be able to start a process on the machine. There are some security restrictions on processes with a GUI, so you will not be able to see the GUI for your process, but the process itself will be created. I just tried it myself, with something like:
Start-Process -FilePath notepad.exe   # example target; the process is created, but its window is not visible
To get more information on running jobs in Azure, please refer to https://azure.microsoft.com/en-us/documentation/articles/automation-starting-a-runbook/
Detailed article about running jobs on Hybrid Workers is here: https://azure.microsoft.com/en-us/documentation/articles/automation-hybrid-runbook-worker/