I am working on PowerShell scripts to do automated deployments to our servers behind our BIG-IP LTM.
I have simple scripts that use the iControl PowerShell cmdlets to disable and re-enable the nodes:
Disable-F5.LTMNodeAddress -Node xxx.xxx.xxx.xxx
These work quite well. However, for this to become a truly automated process, what I need next is a way to query the current connections to the node as they bleed off, so that my automation doesn't begin the deployment until current connections = 0.
I've tried the code here without any luck, and I've gone down a few more rabbit holes that didn't get me what I need.
Hoping someone has tried this more recently and had better luck than I have.
Thanks!
Found it.
https://devcentral.f5.com/questions/get-local-traffic-statistics-gt-nodes
$ic = Get-F5.iControl
$ic.LocalLBNodeAddress.get_statistics("NODE_IP") | %{ $_.statistics.statistics | ? { $_.type -eq "STATISTIC_SERVER_SIDE_CURRENT_CONNECTIONS" } | %{ $_.value.low } }
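Building on that snippet, the "wait until current connections = 0" step could be a small polling loop. A minimal sketch, assuming the iControl snap-in is initialized and using the same placeholder node address as above:

$ic = Get-F5.iControl
while ($true) {
    # Pull the server-side current-connections statistic for the node
    $conns = $ic.LocalLBNodeAddress.get_statistics("xxx.xxx.xxx.xxx") |
        ForEach-Object { $_.statistics.statistics } |
        Where-Object { $_.type -eq "STATISTIC_SERVER_SIDE_CURRENT_CONNECTIONS" } |
        ForEach-Object { $_.value.low }
    if (-not ($conns -gt 0)) { break }   # connections have bled off
    Start-Sleep -Seconds 5
}
# Safe to start the deployment here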
I am trying to use DSC push configuration inside of MDT, and am currently running into a few issues. Here is roughly my thought process:
Because I am doing a "whole server configuration" for a pretty complex config, the LCM is going to require multiple reboots.
I want to make sure that I depend upon the LCM to communicate to MDT that a reboot is required, after which MDT will reboot the server and re-run the configuration.
I'd like for the configuration to continue in spite of any warnings or failures. The BDD.log will show what these are, so that the person provisioning the server can see what failed and either fix it in the code, or just fix it after the fact.
I think there is some way of using $TSenv:SMSTSRetryRequested and $TSenv:SMSTSRebootRequested for this.
What I've tried so far is something along these lines, however this didn't work:
if ((Get-DscConfigurationStatus).RebootRequested -eq $true) {
    $TSenv:SMSTSRebootRequested = $true
    $TSenv:SMSTSRetryRequested  = $true
}
else {
    $TSenv:SMSTSRebootRequested = $false
    $TSenv:SMSTSRetryRequested  = $false
}
What ends up happening is that the configuration proceeds as expected: on every reboot, it checks through existing resources and then starts on new ones further down in the configuration, even when there are failures.
But for some reason, MDT seems to think a reboot is always requested, even when it gets all the way to the end of the configuration. Mind you, there always seem to be failures here and there during the DSC apply phase, so perhaps that's the problem? Or maybe the $TSenv variables are persisting through the reboots?
I also know that there is a way inside the DSC configuration to tell the LCM how to process reboots (see the sketch below), but I'm not quite sure how to use that setting, where to put it, or whether it will solve my problem. It's my next step, but I wanted to post here to see if anybody has any thoughts or has done something like this before.
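For reference, the LCM knob alluded to above is set through a meta-configuration rather than a regular resource. A minimal sketch, assuming the WMF 5 [DSCLocalConfigurationManager()] syntax (the setting names are real LCM properties; the values and paths here are illustrative):

[DSCLocalConfigurationManager()]
configuration LcmRebootSettings
{
    Node localhost
    {
        Settings
        {
            # Don't reboot automatically; leave the pending reboot for the
            # task sequence (MDT) to detect and handle.
            RebootNodeIfNeeded = $false
            # Resume the configuration where it left off after each reboot.
            ActionAfterReboot  = 'ContinueConfiguration'
        }
    }
}

LcmRebootSettings -OutputPath C:\DscMeta
Set-DscLocalConfigurationManager -Path C:\DscMeta -Verbose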
I am using appcmd.exe to add IP addresses to the ipSecurity section of IIS. I have a very basic PowerShell script that reads a list of no more than 10-20 IPs from a web service and adds them to ipSecurity. I then run the script through Task Scheduler every 5 minutes to keep the list updated.
The command I am using via PowerShell to add the IPs is:
& $appcmd set config -section:system.webServer/security/ipSecurity /+"[ipAddress='$ip_address',subnetMask='$subnet_address',allowed='False']" /commit:apphost | Out-null
It's probably important to mention that on every execution of my script, I first clear this list entirely using this command:
& $appcmd clear config /delete:true /section:system.webServer/security/ipSecurity /commit:apphost | Out-null
and then I add the new, updated IP list.
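For what it's worth, a diff-based refresh is one hypothetical alternative to the full clear-and-re-add: it only touches entries that changed, so the list is never momentarily empty and applicationHost.config isn't rewritten when nothing changed. A sketch, assuming $appcmd is the path to appcmd.exe and $new_ips is the string array from the web service (subnet masks omitted for brevity):

Import-Module WebAdministration

$section = 'system.webServer/security/ipSecurity'

# Entries currently in applicationHost.config
$current = (Get-WebConfiguration -Filter "/$section" -PSPath 'MACHINE/WEBROOT/APPHOST').Collection |
    ForEach-Object { $_.ipAddress }

# Add IPs that are new since the last run
foreach ($ip in @($new_ips) | Where-Object { $current -notcontains $_ }) {
    & $appcmd set config -section:$section /+"[ipAddress='$ip',allowed='False']" /commit:apphost | Out-Null
}

# Remove IPs that dropped off the web-service list
foreach ($ip in @($current) | Where-Object { $new_ips -notcontains $_ }) {
    & $appcmd set config -section:$section /-"[ipAddress='$ip']" /commit:apphost | Out-Null
}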
When I run this clear-and-re-add cycle, I have noticed that the IIS service sometimes drops. It doesn't happen every time, but it does happen, and when I stop the scheduled task, IIS works like a charm.
Any help? To be honest, I couldn't find anything related on the Microsoft pages.
thank you
I'm automating Windows updates for a set of SQL servers, mostly running Windows Server 2016. Typically after you install updates you have to reboot, and there is a period after rebooting where the server is still applying updates and users can't remote into it. In my automation, I would like to wait until that period is over before reporting a successful update. Is there an indicator I can check remotely through PowerShell to determine whether a user can remote in?
I've checked the main RDP services (TermService, SessionEnv and UmRdpService) during this period and they are all running, so if there's some sort of indicator, it isn't them. Maybe there is a field somewhere that states that Windows is applying updates? All of the servers are virtualized through VMware, if it matters.
Thanks for reading!
How about testing the port that the remote desktop service listens on?
Test-NetConnection server -Port 3389
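For automation, that check could be wrapped in a wait loop. A sketch, assuming $server holds the hostname (Test-NetConnection needs PowerShell 4 / Windows Server 2012 R2 or later):

# Wait until the RDP port answers before proceeding
while (-not (Test-NetConnection $server -Port 3389 -InformationLevel Quiet)) {
    Start-Sleep -Seconds 5
}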
I didn't have any luck on Server Fault either, but I did eventually find a solution myself; posting here in case anyone finds this thread looking for help.
There isn't actually a service that changes state when you can RDP back into a server; that's probably determined somewhere in the Windows code, and there's no way you could find the flag. However, the TiWorker process runs after a reboot to install Windows updates, and in my recent experience, when that exe completes you can RDP 100% of the time, which is good enough for my automation.
I loop over this piece of code at 5-second intervals until it returns nothing, then finish.
Get-Process -ComputerName $server | ? {$_.ProcessName -match 'TiWorker'}
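Put together, the loop might look like this sketch (Get-Process -ComputerName works in Windows PowerShell 5.1 and earlier):

# Poll every 5 seconds until TiWorker is no longer running on the server
while (Get-Process -ComputerName $server | ? { $_.ProcessName -match 'TiWorker' }) {
    Start-Sleep -Seconds 5
}
# TiWorker has exited; the server should now accept RDP sessions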
vFriends,
I have a very specific question:
I have a datacenter with 4 clusters, made up of 14 big hosts and almost 500 VMs. With that many VMs, I needed to collect info from them, so I built a tool of my own that collects it through several PowerShell scripts that connect to the VIServer. Here is an example:
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer vCenterServer.mydomain.com -wa 0
Get-Stat -Realtime -MaxSamples 1 -Stat cpu.latency.average -Entity (Get-VMHost * | Get-VM * | Where-Object {$_.PowerState -eq "PoweredOn"}) | Select-Object Entity,MetricId,Value | Format-Table
This one gets the latest latency-average reading for all VMs. There are many others.
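As an illustration of how such samples can be archived into history (a hypothetical sketch; the CSV path is made up, and Export-Csv -Append needs PowerShell 3 or later):

# Take one realtime sample per powered-on VM and append it to a CSV archive
$samples = Get-Stat -Realtime -MaxSamples 1 -Stat cpu.latency.average `
    -Entity (Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }) |
    Select-Object Timestamp, Entity, MetricId, Value
$samples | Export-Csv -Path 'C:\Stats\cpu-latency.csv' -Append -NoTypeInformation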
It has always worked like a charm, and I now have more than 6 months of history, which has become a good source for new investments and managerial decision making.
That was until the VCB backup tool started using a similar method to get the info it needs to perform backups. When my tool is running, the backup never starts. I tried installing PowerCLI on another server and collecting from there, but retrieving the data turned out to be painfully slow (yes, I disabled the certificate check too): around 5 minutes on average, compared to 30 seconds from inside the vCenter server.
Note: vRealize doesn't give me the info I need. VMTurbo does, but it's too expensive for us to buy right now.
So I have thought of 3 alternatives:
Use the other server and accept a roughly 10x slower sampling rate (5 minutes vs. 30 seconds), losing most of my current data resolution
Ask the backup analyst to stop my scripts every time the backup needs to run (causing another big gap in the collected data)
Install another vCenter server and either run my scripts against it OR have the backup tool connect to it.
I don't actually want vCenter to operate in Linked Mode. I just want another vCenter for the purposes listed above, much like an additional Active Directory domain controller in a forest.
Is that possible?
Am I missing another good alternative?
How do I configure it?
Will a Platform Services Controller server do the trick?
Thanks,
Dave
I've developed a PowerShell script to deploy updates to a suite of applications, including SQL Server database updates.
Next I need a way to execute these scripts on 100+ servers without manually connecting to each one. "PowerShell v2 with remoting" is not an option, as it is still in CTP.
PowerShell v1 with WinRM looks the most promising, but I can't get feedback from my scripts. The scripts execute, but I need to know about exceptions. The scripts create a log file; is there a way to send the contents of that log file back to the "client" (the local computer making the remote calls)?
Quick answer is no. Long version: it's possible, but it will involve lots of hacks. I developed a very similar deployment script/system using PowerShell 2 last year. The remoting feature is the primary reason we put up with the CTP status. PowerShell 1 with WinRM is flaky at best and, as you said, gives no real feedback apart from ok or failed.
Alternatives I considered included PsExec, which is very much non-standard and may be blocked by firewalls. The other approach involves using system management tools such as Microsoft's System Center, but that's just a big hammer for a tiny nail. So you have to pick your poison...
Just a comment on this: the easiest way to capture PowerShell output is to use the Start-Transcript cmdlet to pipe console output to a file. We have a small snippet at the start of all our scripts that sends a log file with the console output from each script to a central file share, and names the log file with the script name and execution date so that we have an idea of what happened. It's not too hard to pipe all those log files into a database for further processing either. It probably won't solve all your problems, but it should definitely help with the "getting data back" part.
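A sketch of what such a snippet can look like at the top of a script (the share path is illustrative; Stop-Transcript goes at the end):

# Transcribe all console output to a central share, with the log named
# after the script and the execution date
$scriptName = [IO.Path]::GetFileNameWithoutExtension($MyInvocation.MyCommand.Path)
$logFile    = '{0}_{1:yyyyMMdd_HHmmss}.log' -f $scriptName, (Get-Date)
Start-Transcript -Path "\\fileserver\DeployLogs\$logFile"

# ... the actual deployment work ...

Stop-Transcript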
best regards,
Trond