Report: Veritas Backup Exec 16 list of servers with the last successful associated job - PowerShell

I am using the PowerShell module BEMCLI and I want to create a report with these columns: the list of servers, and the jobs associated with each server together with their last successful run.
I can get the list of servers with: Get-BEAgentServer
I can also get the list of jobs that succeeded in a given period with:
Get-BEJobHistory -JobStatus Succeeded -FromStartTime (Get-Date).AddHours(-24) | ft -auto
Is there an easy way to get what I want?
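One possible approach, as a rough sketch only: loop over the agent servers and pick the most recent successful history entry for each job. This is not verified against BEMCLI; it assumes that Get-BEAgentServer output can be piped into Get-BEJob, that a job can be piped into Get-BEJobHistory, and that the history objects expose Name and EndTime properties. Check these with Get-Help and Get-Member in your environment.
# Sketch: for each agent server, list its jobs and the most recent successful
# run within the last 24 hours. Pipeline bindings and property names below are
# assumptions; verify them with Get-Help and Get-Member on your system.
Get-BEAgentServer | ForEach-Object {
    $server = $_
    $server | Get-BEJob | ForEach-Object {
        $lastRun = $_ | Get-BEJobHistory -JobStatus Succeeded -FromStartTime (Get-Date).AddHours(-24) |
                        Sort-Object EndTime -Descending |
                        Select-Object -First 1
        [PSCustomObject]@{
            Server        = $server.Name
            Job           = $_.Name
            LastSucceeded = $lastRun.EndTime
        }
    }
} | Format-Table -AutoSize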

Related

How to get nodename of running celery worker?

I want to shut down specific celery workers. I was using app.control.broadcast('shutdown'); however, this shuts down all the workers, so I would like to pass the destination parameter.
When I run ps -ef | grep celery, I can see the --hostname on the process.
I know that the format is {CELERYD_NODES}{NODENAME_SEP}{hostname} from the utility function nodename
destination = ''.join(['celery',              # CELERYD_NODES defined at /etc/default/newfies-celeryd
                       '#',                   # from celery.utils.__init__ import NODENAME_SEP
                       socket.gethostname()])
Is there a helper function which returns the nodename? I don't want to create it myself since I don't want to hardcode the value.
I am not sure if that's what you're looking for, but with control.inspect you can get info about the workers, for example:
app = Celery('app_name', broker=...)
app.control.inspect().stats() # statistics per worker
app.control.inspect().registered() # registered tasks per each worker
app.control.inspect().active() # active workers/tasks
so basically you can get the list of workers from each one of them:
app.control.inspect().stats().keys()
app.control.inspect().registered().keys()
app.control.inspect().active().keys()
for example:
>>> app.control.inspect().registered().keys()
dict_keys(['worker1#my-host-name', 'worker2#my-host-name', ..])

How to refer previous task and stop the build in azure devops if there is no new data to publish an artifact

Getstatus.exe will report either "new data available" or "no new data available". If new data is available, the next jobs should be executed; otherwise nothing should be executed. How should I do it? (I am working in the classic editor.)
Example: I have a set of tasks; consider 4 tasks:
task-1: builds the solution
task-2: runs Getstatus.exe, which gets the status of data available or no data available
task-3: I should be able to use the result of the above task in a condition (or via some API query) to proceed to publish an artifact if data is available; if not, cleanly break out of the task and stop the build. It shouldn't proceed to publish the artifact or move to the next task.
task-4: publish artifact
First, what you need is to set a variable in the task where you run Getstatus.exe, and then set a condition on the next tasks; a sketch of both is shown below. If you set doThing to a value other than Yes, the dependent task will be skipped.
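As a rough sketch of both steps (doThing is just an example variable name): in the PowerShell task that runs Getstatus.exe, emit a logging command to set the variable, and then give the downstream task a custom condition under its Control Options:
# In the PowerShell task that runs Getstatus.exe: set a pipeline variable
Write-Host "##vso[task.setvariable variable=doThing]Yes"

# On the next task, Control Options -> Run this task -> Custom conditions:
#   and(succeeded(), eq(variables['doThing'], 'Yes'))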
Since we need to execute different tasks based on the result of running Getstatus.exe, we need to set the condition based on that result.
To resolve this, as Krzysztof Madej said, we could set a variable based on the return value of Getstatus.exe in an inline PowerShell task:
# Placeholder: capture the value returned by Getstatus.exe
$dataAvailable = $(The value of the Getstatus.exe)
if ($dataAvailable -eq "True")
{
    Write-Host ("##vso[task.setvariable variable=Status]Yes")
}
elseif ($dataAvailable -eq "False")
{
    Write-Host ("##vso[task.setvariable variable=Status]No")
}
Then set the corresponding condition on the next task:
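For example, the publish-artifact task can use a custom condition (entered under the task's Control Options) that checks the Status variable set above:
and(succeeded(), eq(variables['Status'], 'Yes'))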
You could check the document Specify conditions for some more details.

Data Driven testing with cucumber protractor

Let's say I have a scenario in my demo.feature file:
Scenario Outline: Gather and load all submenus
  Given I will login using <username> and <password>
  When I will click all links

  Examples:
    | username | password |
    | user1    | pass1    |
    | user2    | pass2    |
Let's say I have a file called users.json.
How can I get those usernames and passwords from that external file into my demo.feature?
Can I pick up the file by passing parameters to my npm script, like below?
npm run cucumber -- --params.environment.file=usernames.json
I recommend having the login step read that JSON file within the step definition. Just make sure not to check it into the repo; instead, expect it to exist at a known location locally, outside the repository.
Doing the above is useful for a couple of reasons:
- An engineer running your tests does not need to know that a param must be passed in from the command line
- The code is self-descriptive in that step as to how it logs in
- You can add better error handling
- You can use multiple user files if need be, by having hooks define paths etc. based on tags

Unable to automate the migration process using Task Scheduler and SharePoint cmdlet “MigrateUserAccount”

Unable to automate the migration process using Task Scheduler and the SharePoint cmdlet "MigrateUserAccount"; I am getting the error "You cannot call a method on a null-valued expression".
$spFarm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$spFarm.MigrateUserAccount("$from\$name", "$to\$name", $false)
When I run the PowerShell script using the "SharePoint 2010 Management Shell", it runs and the output is successful, but when I configure the same script in Task Scheduler, it runs and throws the error "You cannot call a method on a null-valued expression".
The scheduled task is configured to run with the highest privileges.
The task was created using a service account that has administrative access to these servers and has also been added to "db_owners" in the SQL database.
Server Architecture
Web Front End 1
Web Front End 2
Application Server 1
Application Server 2
Database Cluster Node1
Database Cluster Node2
If this is all on one line...
$spFarm = [Microsoft.SharePoint.Administration.SPFarm]::Local $spFarm.MigrateUserAccount("$from\$name", "$to\$name", $false)
...then $spFarm will not have been defined when the MigrateUserAccount function is invoked.
You'll either need to put a semicolon between the two statements, or put them on separate lines like so:
$spFarm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$spFarm.MigrateUserAccount("$from\$name", "$to\$name", $false)
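For completeness, the single-line form with a semicolon mentioned above would be:
$spFarm = [Microsoft.SharePoint.Administration.SPFarm]::Local; $spFarm.MigrateUserAccount("$from\$name", "$to\$name", $false)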

Waiting for a new deployment to fully initialize before swap the staging/production slot (swap VIP)?

I use the following code to swap my newly deployed application from the staging slot into the production slot (swap VIP):
Get-HostedService -serviceName $serviceName -subscriptionId $subcription -certificate $certificate | Get-Deployment -slot staging | Move-Deployment | Get-OperationStatus -WaitToComplete
I thought that the -WaitToComplete flag would make sure all VMs have fully initialized before doing the swap; however, it doesn't, and it performs the swap while the newly deployed application in the production slot is still initializing, leaving it unavailable for about 5-10 minutes until it initializes fully.
What is the best way to make sure that the application is fully initialized before doing the Swap VIP operation?
This PowerShell snippet will wait until every instance is ready (building on the answer @astaykov gave).
It queries the state of the running instances in the staging slot, and only if all are showing as 'ready' will it leave the loop.
$hostedService = "YOUR_SERVICE_NAME"
do {
# query the status of the running instances
$list = (Get-AzureRole -ServiceName $hostedService `
-Slot Staging `
-InstanceDetails).InstanceStatus
# total number of instances
$total = $list.Length
# count the number of ready instances
$ready = ($list | Where-Object { $_ -eq "ReadyRole" }).Length
Write-Host "$ready out of $total are ready"
$notReady = ($ready -ne $total)
If ($notReady) {
Start-Sleep -s 10
}
}
while ($notReady)
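Once the loop reports that every instance is ready, the swap from the question can be run, using the same cmdlets and variables as in the original command:
Get-HostedService -serviceName $serviceName -subscriptionId $subcription -certificate $certificate |
    Get-Deployment -slot staging |
    Move-Deployment |
    Get-OperationStatus -WaitToComplete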
I am guessing that what you might actually be seeing is the delay that it takes for the DNS entries to be propagated and become available.
What you should find is that once the status is reported as Ready, you may still not be able to access your site using the staging URL "http://.cloudapp.net"; it might not come up... but if you look on the Management Portal you will see, at the bottom of the Properties, a value for 'VIP'. If you use that IP address ("http://xxx.xxx.xxx.xxx") you should be able to get to your site.
When you do a SWAP you will find similar behavior. It will take some time for the DNS updates to propagate, but you will likely see that you can still access the site with either the IP address or the staging address (if it has become available).
Finally, one question... based on your question it sounds like you might be deploying to staging as part of your build and then immediately promoting it to production. Is this correct? If so, why not just deploy straight to the production deployment? (I'm not suggesting that deploying directly into production is a best practice... but if that is your workflow I see no benefit to the temporary deployment to staging.)
Hope this helps!
I am not very familiar with PowerShell, but from my experience with shells in general, you are pipelining commands. Each segment before a pipe character (|) represents a single command which passes its result to the next command in the pipe (the command after the pipe character). And because you are executing these commands before the deployment is fully complete, the newly deployed app gets swapped into the production slot too early.
The first thing to note here is that the -WaitToComplete argument applies only to the last command, which is Get-OperationStatus.
The other thing I see is that this PowerShell command will just do the VIP swap. What about the deployment?
From what you described, it appears that your build server auto-deploys to staging, and you have a post-build event that executes the swap script. What Mike Erickson suggests here would make sense if your flow is like that: immediately swapping after deploying to staging. Why would you deploy to staging if you are going to swap without checking application health first? However, I would not recommend a direct deployment to production (delete + deploy), but rather a service upgrade, because when we do a service upgrade our deployment keeps its public IP address. If we delete + deploy, we get a new public IP address, and the public IP address of a hosted service is only guaranteed not to change until the deployment is deleted.
Finally, you should expand your PowerShell script a bit. First include a routine which will check (and wait until) the staging slot is "ready", and then perform the swap. As I said, I'm not much into PowerShell, but I'm sure this is feasible.
Just my 2 cents.
UPDATE
After revisiting this guide, I now understand something. You are waiting for an operation to complete, but it is the VIP swap operation you are waiting on. If your staging deployment is not yet ready, you have to wait for it to become ready first. And, as Mike mentioned, there might also be a DNS delay, which is noted at the end of the guide:
Note:
If you visit the production site shortly after its promotion, the DNS name might not be ready. If you encounter a DNS error (404), wait a few minutes and try again. Keep in mind that Windows Azure creates DNS name entries dynamically and that the changes might take a few minutes to propagate.
UPDATE 2
Well, you will have to query all the roles and all of their instances and wait for all of them to be ready. Technically you could perform the VIP swap with at least one ready instance per role, but I think that would complicate the script even more.
Here's a minor tweak to Richard Astbury's example above that will retry a limited number of times. All credit to him for the original sample code, so I'd vote for his as the answer most to the point. I'm simply posting this variation here as an alternative for people to copy/paste as needed:
$hostedService = "YOUR_SERVICE_NAME"
# Wait roughly 10 minutes, plus time required for Azure methods
$remainingTries = 6 * 10
do {
$ready=0
$total=0
$remainingTries--
# query the status of the running instances
$list = (Get-AzureRole -ServiceName $hostedService -Slot Staging -InstanceDetails).InstanceStatus
# count the number of ready instances
$list | foreach-object { IF ($_ -eq "ReadyRole") { $ready++ } }
# count the number in total
$list | foreach-object { $total++ }
"$ready out of $total are ready"
if (($ready -ne $total) -and ($remainingTries -gt 0)) {
# Not all ready, so sleep for 10 seconds before trying again
Start-Sleep -s 10
}
else {
if ($ready -ne $total) {
throw "Timed out while waiting for service to be ready: $hostedService"
}
break;
}
}
while ($true)