I'm going to be traveling for the next month, and I'd like to automate the VPN connection process so that, on event X, the script fires and automatically connects me. I've already configured the L2TP/IPSec VPN connection in ms-settings:network-vpn and verified it works, but it's the automation step that's proving problematic.
Windows GUI: The credentials have been saved.
PowerShell: The RememberCredential property is set to True
VBScript: Curiously, the VPN connection is hidden; it doesn't appear when enumerating the Network Connections shell folder:
' Enumerate the Network Connections shell folder (namespace 49)
Dim oShell : Set oShell = CreateObject("Shell.Application")
Dim NetConn : Set NetConn = oShell.Namespace(49)
Dim Connections : Set Connections = NetConn.Items
WScript.Echo "Connection Count [" & Connections.Count & "]"
For i = 0 To Connections.Count - 1
    WScript.Echo "Connections.Item(" & i & ").Name: [" & Connections.Item(i).Name & "]"
Next
rasdial <entry>: Expectedly returns error 691 (access was denied because the username and/or password was invalid).
rasphone -d <entry>: Displays the connection dialog, whereas I'd prefer it to connect automatically and silently.
Is this even possible in Windows 10? Or am I just overlooking some small yet key detail?
I ended up leveraging Add-VpnConnectionTriggerApplication to trigger an automatic connection of the VPN when specific executables/UWP applications launch. The downside is that when doing this, PowerShell warns that SplitTunneling must be enabled, which is less than ideal.
However, after playing around with it for a while (only two hours or so at this point) to make sure the VPN keys off specific executables/UWP apps, I ended up disabling SplitTunneling and, paradoxically, it appears to keep working as I'd hope/expect. I rebooted a few times and logged on, and sure enough, by the time the desktop loaded the VPN had been established.
I need to do more testing to confirm, but this is sufficient to help save me from myself.
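For reference, the basic shape of the trigger registration looks like the sketch below; the connection name and application identifiers are placeholders, not my actual entries:
# Placeholder connection name and apps; substitute your own VPN entry and applications.
Add-VpnConnectionTriggerApplication -ConnectionName "MyVpn" `
    -ApplicationID "C:\Program Files\Mozilla Firefox\firefox.exe"

# UWP apps are identified by package family name rather than a file path.
Add-VpnConnectionTriggerApplication -ConnectionName "MyVpn" `
    -ApplicationID "Microsoft.WindowsTerminal_8wekyb3d8bbwe"

# The cmdlet warns that split tunneling must be on for app triggering.
Set-VpnConnection -Name "MyVpn" -SplitTunneling $true

# Confirm the registered triggers.
Get-VpnConnectionTrigger -ConnectionName "MyVpn"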
I do this by checking the Remember my sign-in info checkbox when creating the VPN connection.
You can check this in your PowerShell script by ensuring that Get-VpnConnection returns RememberCredential : True.
If this is the case, then rasdial should automatically connect it.
I do it with this:
<#
.SYNOPSIS
Ensures the VPN connection (assumed to have saved credentials) is connected.
#>
function Connect-Vpn
{
    [CmdletBinding()]
    param (
        [object]
        $Settings
    )

    # Pick the first VPN entry whose server address matches the configured pattern
    # and which has credentials saved (RememberCredential).
    $rr1 = Get-VpnConnection -Verbose:$false |
        Where-Object { $_.ServerAddress -imatch $Settings.VpnConnectionPattern -and $_.RememberCredential } |
        Select-Object -First 1

    if ($rr1.ConnectionStatus -ne 'Connected')
    {
        rasdial.exe $rr1.Name
        # rasdial returns a non-zero exit code on failure.
        if ($LASTEXITCODE -ne 0)
        {
            throw "Cannot connect to '$($rr1.Name)'."
        }
    }
    else
    {
        Write-Verbose "Already connected to '$($rr1.Name)'."
    }
}
You will have to massage this code to your needs as this uses some fields from my settings file...
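For illustration only, a hypothetical settings object (the real settings file isn't shown here) could be passed in like this:
# Hypothetical settings object standing in for the author's settings file.
$settings = [pscustomobject]@{
    VpnConnectionPattern = 'vpn\.example\.com'   # regex matched against ServerAddress
}

Connect-Vpn -Settings $settings -Verbose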
Related
It's obvious that we need to import the data from a data source into an SSAS Tabular model.
Imagine we have two data source connections for two different environments, ENV1 and ENV2. Both environments contain the same tables but with different data.
Is it possible to switch to ENV2 while I am working on ENV1 in SSAS Tabular? Is there any alternative available for this requirement?
Thanks in advance,
Lalith Varanasi.
It sounds like you want to have one data source, but to update the connection string based on the environment you deploy to.
I have built a CI/CD process for our tabular models which uses the TOM library in a PowerShell script to read the .bim file, modify the connection strings based on the environment we are deploying to, and create partitions as needed as well as the administrative roles. I can't share the full script at the moment because there are a few references specific to my company, but basically:
try{
Write-Log "loading Microsoft.AnalysisServices assemblies that we need" Debug
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.Core") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.Tabular") | Out-Null
}
catch{
Write-Log "Could not load the needed assemblies... TODO: Figure out and document how to install the needed assemblies. (I would start with the SQL feature pack)" Error -ErrorAction Stop
}
$modelBim = [IO.File]::ReadAllText($bimFilePath)
$db = [Microsoft.AnalysisServices.Tabular.JsonSerializer]::DeserializeDatabase($ModelBim)
#Our DEV and TEST models get deployed on the same SSAS instance. We have to modify the name of the model to reference which environment they are reading from.
$db.ID = "$($modelName)_$($TargetEnvironment)"
$db.Name = "$($modelName)_$($TargetEnvironment)"
Write-host "Updating the data source connections to use the $TargetEnvironment environment."
foreach ($ds in $db.Model.DataSources){
Write-Log "Updating connection information for the $($ds.Name) connection" Debug
#I use a PowerShell function called Get-DBServerFromEnvironment which we use to pull the correct server name for each of our different databases. Our database names are the same in each environment, except that they are prefixed with the environment name.
#Using this design, DB1,DB2,DB3 is the name of the data source (say ApplicationOLAP,DataWarehouse,ThirdPartyDB) and you set environment-specific connection strings in a separate custom function so that the logic is stored in one place.
switch($ds.Name ){
"DB1"{$ds.ConnectionString = "EnvironmentSpecificConnectionStringToDB1"}
"DB2"{$ds.ConnectionString = "EnvironmentSpecificConnectionStringToDB2"}
"DB3"{$ds.ConnectionString = "EnvironmentSpecificConnectionStringToDB3"}
"DB4"{$ds.ConnectionString = "EnvironmentSpecificConnectionStringToDB4"}
default{Write-Log "Unknown Data source name" Warning}
}
}
$server = New-Object Microsoft.AnalysisServices.Tabular.Server
#$serverName is the SSAS server; I get this by calling a custom function and specifying the target environment.
$server.Connect($serverName)
$server.BeginTransaction()
if ($server.Databases.Contains($db.ID)){
Write-Log "Tabular database with the ID: $($db.ID) exists. Dropping and recreating"
$server.Databases.FindByName($db.Name).Drop()
$server.Databases.Remove($db.ID, [System.Boolean]::TrueString)
$server.Databases.Add($db) | Out-Null
}
else{
Write-Log "Tabular database with the ID: $($db.ID) does not exist. Creating"
$server.Databases.Add($db) | Out-Null
}
#This part is where you actually write your changes to the server. Modify as needed.
$db.Update( "ExpandFull")
$db.Model.RequestRefresh("Automatic")
$saveOptions = New-Object Microsoft.AnalysisServices.Tabular.SaveOptions
$saveOptions.MaxParallelism = 5
Write-Log "Starting the processing at [$([DateTime]::Now)]. The script will hang while the cube is processing."
$ProcessElapsed = [system.diagnostics.stopwatch]::startnew()
$result = $db.Model.SaveChanges($saveOptions)
$impact = $result.Impact
$xmlaResult = $result.XmlaResults
#TODO: Check the result for success/failure.
Write-Log "Processing took $($ProcessElapsed.Elapsed.ToString()). Hours:Minutes:Seconds:Milliseconds"
$server.CommitTransaction()
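The Get-DBServerFromEnvironment function mentioned in the comments is internal and isn't shown. Purely as an illustration of the idea, a helper that maps an environment plus a logical data source name to a connection string might look like this (all server names and naming conventions below are hypothetical):
# Hypothetical helper: builds an environment-specific connection string for a logical
# data source name. Replace the server list and naming scheme with your own.
function Get-EnvironmentConnectionString {
    param (
        [Parameter(Mandatory)][string]$DataSourceName,    # e.g. "DB1"
        [Parameter(Mandatory)][string]$TargetEnvironment  # e.g. "DEV", "TEST", "PROD"
    )

    $serverByEnvironment = @{
        DEV  = 'sql-dev.contoso.local'
        TEST = 'sql-test.contoso.local'
        PROD = 'sql-prod.contoso.local'
    }

    $server   = $serverByEnvironment[$TargetEnvironment]
    $database = "$($TargetEnvironment)_$DataSourceName"   # databases prefixed with the environment name

    "Provider=SQLNCLI11;Data Source=$server;Initial Catalog=$database;Integrated Security=SSPI;"
}
With something like that in place, each case in the switch above reduces to $ds.ConnectionString = Get-EnvironmentConnectionString -DataSourceName $ds.Name -TargetEnvironment $TargetEnvironment.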
On your BIM model you can change your data source connection string.
Go to Model > Existing Connections > Modify,
or use the Tabular Explorer and change your data source.
-> You need to process your tables after this change.
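If you would rather script that processing step than run it from the designer, a minimal sketch using the same TOM assemblies as the script in the other answer (the instance and model names below are placeholders) could be:
# Process (refresh) all tables of an already deployed model via TOM.
# "localhost\TABULAR" and "MyModel" are placeholders for your instance and model names.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.Tabular") | Out-Null

$server = New-Object Microsoft.AnalysisServices.Tabular.Server
$server.Connect("localhost\TABULAR")

$db = $server.Databases.FindByName("MyModel")
$db.Model.RequestRefresh("Full")     # queue a full refresh of the whole model
$db.Model.SaveChanges() | Out-Null   # execute the refresh on the server
$server.Disconnect()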
Is that what you are looking for?
Have a nice day,
Arnaud
If you're using Tabular Editor, there's a simple option to prevent connection strings from being deployed, in the "Deployment Wizard" under "Model" > "Deploy..."
By default, "Deploy Connections" is unchecked, meaning that the connection strings used on the target database will be left unchanged, regardless of what you're using in your development database.
I am trying to connect to Google for a quick check of Internet availability and host response. If the check returns 200, I move on to the next script.
The script below works initially. However, when I try multiple times, specifically after two runs, PowerShell hangs and doesn't move forward.
If I restart PowerShell and run the script, it runs OK twice, then it hangs again.
I am planning to put this in the Task Scheduler and run it regularly.
What am I missing here? Can anyone of you advise?
# Once connection gets established, quick status test
$request = [System.Net.WebRequest]::Create("http://www.google.com")
try {
$response = $request.GetResponse()
} catch {
if ($error) {
# write some error on the log
}
}
# if response returns 200, proceed next step, else create critical log
if ($response.StatusCode.value__ -eq 200) {
& ./hostCheck.ps1 ; # Start host Check
}
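A likely cause, though I'm assuming details the question doesn't show, is that the response is never closed: HttpWebRequest keeps a small default pool of connections per host, so once a couple of responses are left open the next GetResponse() call blocks. A sketch of the same check with the response always closed:
# Same connectivity check, but the response is always released in finally.
$request = [System.Net.WebRequest]::Create("http://www.google.com")
$request.Timeout = 15000   # fail fast instead of hanging indefinitely

try {
    $response = $request.GetResponse()
    if ([int]$response.StatusCode -eq 200) {
        & ./hostCheck.ps1   # Start host check
    }
}
catch {
    # write some error on the log
}
finally {
    if ($response) { $response.Close() }   # release the underlying connection
}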
I wrote a PowerShell script that connects to a remote machine with the intent of executing a software rollout on that machine. Basically it connects, maps a drive, copies the rollout from the mapped drive to the target machine, then executes a Perl script to install the rollout. If I do those steps manually, everything works fine. When I try using my script, the Perl script fails on the remote machine saying, "The paging file is too small for this operation to complete".
Can someone explain the considerations I need to take into account when operating remotely? I've tried monitoring memory usage and I don't see anything out of the ordinary. Is the page file OS-wide, or is there some type of per-user configuration my script should be setting when it connects?
I can post snippets of my script if needed, but the script is 426 lines so I think it would be overwhelming to post in its entirety.
I found that remote shells are managed differently than logging onto the box and running a PowerShell session locally. I had to increase the maximum amount of memory available to a remote shell using one of the commands below:
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 1024
winrm set winrm/config/winrs @{MaxMemoryPerShellMB="1024"}
The default is 150 MB, which didn't cut it in my case. I can't say that I recommend 1 GB; I'm just a developer. I kept upping it until I found what worked for me.
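To see what is currently in effect before and after the change, you can read the same setting back (values are in MB):
# Current per-shell memory quota via the WSMan: drive.
Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB

# Or dump the whole Winrs section with winrm.
winrm get winrm/config/winrs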
I tried this code to run the Puppet client as an administrator, but the framework still complains with "Access Denied":
Exe (C:\Users\lmo0\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu) failed with 0x5 - Access is denied. .
using System;
using System.Diagnostics;
namespace RunAsAdmin
{
class Program
{
static void Main(string[] args)
{
Process proc = new Process();
Process p = new Process();
p.StartInfo.FileName = @"powershell.exe";
p.StartInfo.Arguments = @"invoke-command -computername vavt-pmo-sbx24 -ScriptBlock {&'C:\Program Files (x86)\Puppet Labs\Puppet\bin\puppet.bat' agent --test --no-daemonize --verbose --logdest console}";
p.StartInfo.Verb = "runas";
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
p.Start();
while (p.HasExited == false) {
Console.WriteLine(p.StandardOutput.ReadLine());
}
Console.ReadLine();
p.WaitForExit();
p.Close();
}
}
}
I use the following code to swap my newly deployed application from the staging slot into the production slot (swap VIP):
Get-HostedService -serviceName $serviceName -subscriptionId $subcription -certificate $certificate | Get-Deployment -slot staging | Move-Deployment |Get-OperationStatus –WaitToComplete
I thought that the -WaitToComplete flag would make sure all VMs have fully initialized before doing the swap; however, it doesn't, and it performs the swap while the newly deployed application in the production slot is still initializing, leaving it unavailable for about 5-10 minutes until it initializes fully.
What is the best way to make sure that the application is fully initialized before doing the Swap VIP operation?
This PowerShell snippet will wait until every instance is ready (building on the answer @astaykov gave).
It queries the state of the running instances in the staging slot, and only if all are showing as 'ready' will it leave the loop.
$hostedService = "YOUR_SERVICE_NAME"
do {
# query the status of the running instances
$list = @((Get-AzureRole -ServiceName $hostedService `
    -Slot Staging `
    -InstanceDetails).InstanceStatus)
# total number of instances (@() guards against a single instance coming back as a scalar)
$total = $list.Count
# count the number of ready instances
$ready = @($list | Where-Object { $_ -eq "ReadyRole" }).Count
Write-Host "$ready out of $total are ready"
$notReady = ($ready -ne $total)
If ($notReady) {
Start-Sleep -s 10
}
}
while ($notReady)
I am guessing that what you might actually be seeing is the delay it takes for the DNS entries to be propagated and become available.
What you should find is that once the status is reported as Ready, you may not be able to access your site using the staging URL "http://<guid>.cloudapp.net"; it might not come up. But if you look in the Management Portal you will see, at the bottom of the Properties, a value for 'VIP'. If you use that IP address ("http://xxx.xxx.xxx.xxx") you should be able to get to your site.
When you do a swap you will see similar behavior. It will take some time for the DNS updates to propagate, but you will likely find that you can still access the site with either the IP address or the staging address (if it has become available).
Finally, one question: based on your question it sounds like you might be deploying to staging as part of your build and then immediately promoting to a production deployment. Is this correct? If so, why not just deploy to the production slot directly? (I'm not suggesting that deploying directly into production is a best practice, but if that is your workflow I see no benefit to the temporary deployment to staging.)
Hope this helps!
I am not very familiar with PowerShell, but from my experience with shells in general, you are pipelining commands. Each part before a pipe character (|) represents a single command which passes its result to the next command in the pipe (the command after the pipe character). And because you are executing these commands before the deployment is fully complete, that's why you get the newly deployed app swapped into the production slot.
First thing to note here is that you have the "-WaitToComplete" argument just for the last command, which is actually Get-OperationStatus.
The other thing I see is that these PowerShell commands will just do the VIP swap. What about the deployment?
From what you described, it appears that your build server auto-deploys to staging, and you have a post-build event that executes the swap script. What Mike Erickson suggests here would make sense if that is your flow: swap immediately after deploying to staging. Why would you deploy to staging if you are going to swap without checking application health first? However, I would not recommend direct deployment to the production slot (delete + deploy), but rather a service upgrade, because a service upgrade keeps the deployment's public IP address, while delete + deploy gives you a new one. The public IP address of a hosted service is guaranteed not to change until the deployment is deleted.
Finally, you should expand your PowerShell script a bit: first include a routine which checks (and waits until) the staging slot is "ready", and only then perform the swap. As I said, I'm not much into PowerShell, but I'm sure this is feasible.
Just my 2 cents.
UPDATE
After revisiting this guide, I now understand something: you are waiting for an operation to complete, but it is the VIP swap operation you are waiting on. If your staging deployment is not yet ready, you have to wait for it to become ready. And, as Mike mentioned, there might also be a DNS delay, which is noted at the end of the guide:
Note:
If you visit the production site shortly after its promotion, the DNS
name might not be ready. If you encounter a DNS error (404), wait a
few minutes and try again. Keep in mind that Windows Azure creates DNS
name entries dynamically and that the changes might take few minutes
to propagate.
UPDATE 2
Well, you will have to query all the roles and all of their instances and wait for all of them to be ready. Technically you could conduct the VIP swap with at least one instance per role being ready, but I think that would complicate the script even more.
Here's a minor tweak to Richard Astbury's example above that will retry a limited number of times. All credit to him for the original sample code, so I'd vote for his answer as the most to the point. I'm simply posting this variation here as an alternative for people to copy/paste as needed:
$hostedService = "YOUR_SERVICE_NAME"
# Wait roughly 10 minutes, plus time required for Azure methods
$remainingTries = 6 * 10
do {
$ready=0
$total=0
$remainingTries--
# query the status of the running instances
$list = (Get-AzureRole -ServiceName $hostedService -Slot Staging -InstanceDetails).InstanceStatus
# count the number of ready instances
$list | foreach-object { IF ($_ -eq "ReadyRole") { $ready++ } }
# count the number in total
$list | foreach-object { $total++ }
"$ready out of $total are ready"
if (($ready -ne $total) -and ($remainingTries -gt 0)) {
# Not all ready, so sleep for 10 seconds before trying again
Start-Sleep -s 10
}
else {
if ($ready -ne $total) {
throw "Timed out while waiting for service to be ready: $hostedService"
}
break;
}
}
while ($true)
I have a task: files available over WebDAV on a remote server (SSL required) must be checked to see whether they have been updated recently, and if so, copied to a local folder. There are a number of other actions that need to be performed after they arrive (copied to other folders, processed, etc.). The operating system I'm working from is Windows Server 2003. I'd love to be able to use PowerShell for this task.
Naturally, I need to browse the files. I've looked tentatively at several solutions:
Trying to map a drive using "net use" (so far, I get a "System error 67")
Using a product like WebDrive to map a drive (as it happens, WebDrive and another utility on the server seem to conflict with each other for mysterious reasons)
Browsing and manipulating the files by issuing HTTP requests with the .NET HttpWebRequest object hierarchy through PowerShell (works, but seems a bit complicated)
Purchasing a commercial .NET assembly that simplifies working with WebDAV (the ones I've seen look pricey)
Have you needed to do something similar? Which approach is best? Any that I have missed? TIA.
It will work from PowerShell. Note this example:
http://thepowershellguy.com/blogs/posh/archive/2008/05/31/cd-into-sysinternals-tools-from-powershell.aspx
The problem is that the WebClient service is not running on the Windows 2003 server (it's disabled by default).
The clue was the "System error 67".
I confirmed this from a Win2k3 server: starting the WebClient service gets WebDAV working (and probably PowerShell too). It works out of the box on an XP client (the service runs by default).
Let me know if this doesn't resolve it for you.
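As a sketch of that sequence (the URL and credentials below are placeholders), enabling the service and then mapping the WebDAV folder looks like this:
# Enable and start the WebClient service (disabled by default on Windows Server 2003).
Set-Service -Name WebClient -StartupType Automatic
Start-Service -Name WebClient

# Map the WebDAV folder to a drive letter; replace the URL, user, and password with your own.
net use Z: "https://my.dav.com/~fred" /user:fred fredPW

# The mapped drive can then be browsed like any other drive.
Get-ChildItem Z:\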
As an alternative to PowerShell, you could always do this from a WSH script. Example:
<job>
<reference object="ADODB.Connection"/>
<object id="cnIPP" progId="ADODB.Connection"/>
<object id="recDir" progId="ADODB.Record"/>
<script language="VBScript">
Option Explicit
Private waArgs
Private strSubDir
Private rsItems
Private strLine
Set waArgs = WScript.Arguments
If waArgs.Count < 3 Then
WScript.Echo "Parameters: FolderURL User PW [SubDir]"
WScript.Quit
End If
cnIPP.Open "Provider=MSDAIPP.DSO;Prompt=NoPrompt;" _
& "Connect Timeout=10;" _
& "Data Source=" & waArgs(0), _
waArgs(1), waArgs(2), adConnectUnspecified
If waArgs.Count = 4 Then
strSubDir = waArgs(3)
Else
strSubDir = vbNullString
End If
Set waArgs = Nothing
recDir.Open strSubDir, cnIPP, adModeRead, adFailIfNotExists, _
adDelayFetchFields Or adDelayFetchStream
Set rsItems = recDir.GetChildren()
With rsItems
WScript.Echo .Fields("RESOURCE_PARENTNAME").Value
Do Until .EOF
If .Fields("RESOURCE_ISCOLLECTION").Value Then
strLine = " [DIR] " & .Fields("RESOURCE_PARSENAME").Value
Else
strLine = " " _
& " " & .Fields("RESOURCE_PARSENAME").Value _
& " " & CStr(.Fields("RESOURCE_LASTWRITETIME").Value)
End If
WScript.Echo strLine
.MoveNext
Loop
.Close
End With
Set rsItems = Nothing
recDir.Close
cnIPP.Close
</script>
</job>
A sample run:
D:\Scripts>cscript WebDAV.wsf https://my.dav.com/~fred fred fredPW
Microsoft (R) Windows Script Host Version 5.7
Copyright (C) Microsoft Corporation. All rights reserved.
https://my.dav.com/~fred
junk.htm 2/26/2008 4:28:44 AM
test.log 3/30/2009 12:30:45 PM
[DIR] _private
[DIR] stuff
D:\Scripts>
This approach should work with both WebDAV- and FrontPage-enabled servers without change. The example defaults to protocol auto-negotiation.
To actually retrieve data you'd open an ADODB.Stream on an ADODB.Record opened on the non-directory item.