I would like to execute code that is specific to remote PSSessions. That is, the code doesn't apply locally, but applies to all remote sessions.
Is there any environment variable, function or cmdlet that would effectively return true if I'm in an active PSSession and false if I'm running locally?
Check if the $PSSenderInfo variable exists. From about_Automatic_Variables:
$PSSenderInfo
Contains information about the user who started the PSSession,
including the user identity and the time zone of the originating
computer. This variable is available only in PSSessions.
The $PSSenderInfo variable includes a user-configurable property,
ApplicationArguments, which, by default, contains only the
$PSVersionTable from the originating session. To add data to the
ApplicationArguments property, use the ApplicationArguments parameter
of the New-PSSessionOption cmdlet.
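A minimal check can therefore be as simple as the following sketch (note that under Set-StrictMode referencing a variable that doesn't exist raises an error, which the fuller test below avoids):
if ($PSSenderInfo)
{
    # remote: this code only runs inside a PSSession
}
else
{
    # local
}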
You could also test using this:
If ( (Test-Path variable:PSSenderInfo) -and
     ($Null -ne $PSSenderInfo) -and
     ($PSSenderInfo.GetType().Name -eq 'PSSenderInfo') ) {
    Write-Host -Object "This script cannot be run within an active PSSession"
    Exit
}
This was not my initial finding, but the Test-Path check avoids a "variable PSSenderInfo not set" error when the variable doesn't exist (for example, under Set-StrictMode).
Related
Recently my PowerShell scripts have required me to explicitly say which domain I want to connect to. Is it necessary to write this for each command, or can I set it somehow once at the beginning of the script?
Instead of
Get-ADUser -Server server otherparameters
could I write in the beginning something like
Set-default server to connect to
?
Is it necessary to write this for each command?
No!
You can specify a default parameter value for a parameter belonging to one or more cmdlets by assigning it to the $PSDefaultParameterValues automatic variable:
$PSDefaultParameterValues['*-AD*:Server'] = 'mydc.mydomain.tld'
Any cmdlet you subsequently invoke that matches the *-AD* pattern by name and has a -Server parameter will now implicitly have 'mydc.mydomain.tld' bound to the -Server parameter unless an argument is explicitly passed.
In other words: the next time you invoke Get-ADUser rsterba, PowerShell effectively calls Get-ADUser rsterba -Server 'mydc.mydomain.tld'.
For more information about $PSDefaultParameterValues and how it works, see the about_Parameters_Default_Values help topic.
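A short sketch of how this typically looks in practice (the otherdc.mydomain.tld fallback below is a placeholder, not part of the original setup):
# Set once at the top of the script: every *-AD* cmdlet gets -Server by default
$PSDefaultParameterValues['*-AD*:Server'] = 'mydc.mydomain.tld'
Get-ADUser rsterba                                   # implicitly uses -Server 'mydc.mydomain.tld'
Get-ADUser rsterba -Server 'otherdc.mydomain.tld'    # an explicit argument still takes precedence
# Remove the default again if it should only apply to part of the script
$PSDefaultParameterValues.Remove('*-AD*:Server')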
What is the difference between PowerShell's $env:username and [environment]::username and why would they potentially return different users? (I understand there are other ways to get the current user as well)
Some background:
I have an Azure Pipeline that runs a PowerShell script on a release target. The pipeline agent is configured to run under a specific service account. Part of that PowerShell script uses $env:username to assign permissions. However, permissions are assigned to the local admin account instead. If I change the script to use [environment]::username the correct service account user is given permissions.
$env:USERNAME, while predefined to reflect the current user's username, is a read-write environment variable, just like any other.
Even though it's obviously inadvisable to do so, a statement such as $env:USERNAME = 'foo' changes the value of environment variable USERNAME for the current process as well as its child processes.
This means that if environment variable USERNAME was modified earlier in the same session or in PowerShell's parent process, possibly via an explicitly specified startup environment, it no longer reflects the true username.
While I don't know why the USERNAME environment variable would differ from the real account in the Azure Pipelines case, one example of inadvertently setting the wrong value is PowerShell's Start-Process cmdlet when given the -UseNewEnvironment switch: due to a bug as of PowerShell Core 7.0.0-preview.5 / Windows PowerShell v5.1, $env:USERNAME in the launched process unexpectedly always reflects SYSTEM, irrespective of the actual user account (see this GitHub issue).
By contrast, [Environment]::UserName uses a different method for obtaining the current user's username, which does not rely on the value of $env:USERNAME and always reflects the true username[1].
In short: Only [Environment]::UserName reliably reflects the current user account's username.
[1] From the linked docs: "On Windows the UserName property wraps a call to the Windows GetUserName function. The domain account credentials for a user are formatted as the user's domain name, the \ character, and user name. Use the UserDomainName property to obtain the user's domain name and the UserName property to obtain the user name.
On Unix platforms the UserName property wraps a call to the getpwuid_r function."
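A quick way to see the same thing within a single session (a sketch; the 'foo' value is only there to illustrate the point):
# Overwrite the process-level environment variable (inadvisable outside of a demo)
$env:USERNAME = 'foo'
$env:USERNAME              # now reports 'foo'
[Environment]::UserName    # still reports the real account name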
$env: looks at your environment variables. The variable you're looking at, username, happens to be set to the local username by default. But it doesn't have to be.
[Environment] is a call to the System.Environment class in the .NET Framework. This class has a property called UserName that contains the username of the user logged on to the computer.
Similar names, but very different things. You could get the path statement with $env:Path, for example, or call [Environment]::Version to get the version of the .NET Framework.
Note that in this example I deliberately changed the local environment variable before launching PowerShell so that it reports the wrong username.
C:\WINDOWS\system32>echo %username%
mspow
C:\WINDOWS\system32>set username=bob
C:\WINDOWS\system32>echo %username%
bob
C:\WINDOWS\system32>powershell
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
PS C:\WINDOWS\system32> echo $env:username
bob
PS C:\WINDOWS\system32> [Environment]::Username
mspow
PS C:\WINDOWS\system32> echo $env:ProgramFiles
C:\Program Files
PS C:\WINDOWS\system32> echo $env:path
C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\dotnet\;C:\Users\mspow\AppData\Local\Microsoft\WindowsApps
PS C:\WINDOWS\system32> [Environment]::Version
Major Minor Build Revision
----- ----- ----- --------
4 0 30319 42000
You can see all of the available environment variables by using
Get-Item -Path Env:
The full list of methods and properties for [Environment] is on the linked MSDN page; from PowerShell you can also list the static members with [Environment] | Get-Member -Static.
I'm using the TFS 2017.1 Build and Release feature.
In my release definition, I have a couple of release variables which I need to refer to in my PowerShell task (executed on a remote machine). So far, I've tried the following options but couldn't get it to work.
Added an Execute PowerShell task to store release variables into Environment variables:
$releaseVariables = Get-ChildItem Env: | Where-Object { $_.Name -like "ACL_*" }
Write-Host "##vso[task.setvariable variable=aclVariables]$releaseVariables"
Added an Execute PowerShell on remote machine task:
Here I can't read the environment variables (maybe because this is a remote machine task?)
Write-Verbose "problem reading $env:aclVariables" -Verbose
Then I tried passing the environment variable as an argument, but that didn't work either
param
(
    $RbacAccessTokenParams
)

$RbacAccessTokenParams.GetEnumerator() | % { $_.Name }
$RbacAccessTokenParams | % {
    Write-Verbose "variable is $_" -Verbose
    Write-Verbose "name is $($_.Name)" -Verbose
    Write-Verbose "value is $($_.Value)" -Verbose
}
This is how I passed as argument
-RbacAccessTokenParams $(aclVariables)
What am I missing here?
I've tested your scenario on my side with TFS 2017.3.1, and it works when passing the environment variable as an argument. You can upgrade your TFS first and try again.
Non-secret variables are already stored as environment variables; you do not need to do anything special to access them. You can access them with $ENV:VariableName. Periods are replaced with underscores. So Foo.Bar would be $env:FOO_BAR.
Secret variables should be passed in to the script that requires them.
However, this only applies on the agent. If you're using the PowerShell On Target Machines task to run a script, you need to pass the variables as arguments to the script. There is no way around this, unless you choose to use deployment groups.
Or, better yet, follow a configuration-as-code convention and store application-specific values in source controlled configuration files that your scripts read, so that you are not tightly coupled to your deployment orchestration platform.
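For the "pass them as arguments" route, a minimal sketch of what the target-machine script and the task arguments might look like (the parameter and variable names here are examples, not the original pipeline's):
# deploy.ps1 - runs on the target machine via the 'PowerShell on Target Machines' task
param
(
    [string] $AclVariables
)

Write-Verbose "Received: $AclVariables" -Verbose

# In the task definition, pass the release variable as a script argument, e.g.:
#   -AclVariables "$(ACL_Example)"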
I've made a script to automatically change and/or create the default Outlook signature of all the employees in my company.
Technically, it gets the username environment variable on the machine where the script is deployed, accesses the staff database to get some information about this user, then creates the 3 different files for the signature by replacing values inside linked docx templates. Quite easy and logical.
After various tests, it works correctly when you launch the script directly on a computer, whether from PowerShell ISE, from CMD, or from Visual Studio. But when we tried to deploy it the way it eventually will be deployed, using SCCM, it can't get any environment variables.
Do any of you have an idea about how to get environment variables in a script when it is deployed by SCCM?
Here is what I've already tried:
$Name = [Environment]::UserName
$EnvVarUserName = Get-Item Env:\USERNAME
Even stuff like this:
# get the owner of explorer.exe, i.e. the interactively logged-on user
$proc = gwmi win32_process -Filter "Name = 'explorer.exe'"
$report = @()
ForEach ($p in $proc)
{
    $temp = "" | Select User
    $temp.user = ($p.GetOwner()).User
    $report += $temp
}
Thanks in advance and have a nice day y'all!
[EDIT]:
I've found a way of doing this, not the best one, but it works. I get the name of the machine, look it up in the DB that records which user ID is connected on which machine whenever a laptop joins our network, and then get the info from the staff DB.
I will still look into Matt's idea, which is pretty interesting and, in a way, more accurate.
Thank you all!
How are you calling the environment variable? $Env:computername has worked for me in scripts pushed out via SCCM before.
Why don't you enumerate the "%SystemDrive%\Users" folder, exclude certain built-in accounts, and handle them all in one batch?
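A rough sketch of that "enumerate the Users folder" idea (the exclusion list below is an assumption and would need adjusting for your environment):
# Enumerate profile folders under %SystemDrive%\Users, skipping built-in accounts
$excluded = 'Public', 'Default', 'Default User', 'All Users', 'Administrator'
Get-ChildItem -Path (Join-Path $env:SystemDrive 'Users') -Directory |
    Where-Object { $excluded -notcontains $_.Name } |
    ForEach-Object {
        # $_.Name is the profile (user) name; look it up in the staff database here
        $_.Name
    }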
To use the UserName environment variable the script would have to run as the logged-in user, which also implies that all of your users have at least read access to your staff database, which, at least in our environment, would be a big no-no.
I am trying to use the Active Directory PowerShell module inside a classification rule in File Server Resource Manager on Windows Server 2012 R2.
When I try to just perform:
Import-Module ActiveDirectory
It will crash (I assume) and not update the classification property anymore.
I tried setting the script parameter -ExecutionPolicy Unrestricted, but that didn't help.
Anyone know how to get it to work?
non working code:
# Global variables available:
# $ModuleDefinition (IFsrmPipelineModuleDefinition)
# $Rule (IFsrmClassificationRule)
# $PropertyDefinition (IFsrmPropertyDefinition)
#
# And (optionally) any parameters you provide in the Script parameters box below,
# i.e. "$a = 1; $b = 2" . The string you enter is treated as a script and executed so the
# variables you define become globally available
# optional function to specify when the behavior of this script was last modified
# if it consumes additional files. emit one value of type DateTime
#
# function LastModified
# {
# }
# required function that outputs a value to be assigned to the specified property for each file classified
# emitting no value is allowed, which causes no value to be assigned for the property
# emitting more than one value will result in errors during classification
# begin and end are optional; process is required
#
function GetPropertyValueToApply
{
    # this parameter is of type IFsrmPropertyBag
    # it also has an additional method, GetStream, which returns an IO.Stream object to use for
    # reading the contents of the file. Make sure to close the stream after you are done reading
    # from the file
    param
    (
        [Parameter(Position = 0)] $PropertyBag
    )

    process
    {
        Import-Module ActiveDirectory
        $users = Get-ADUser -Filter * -Properties SID, Department
        return "dummy result"
    }
}
As a note: this works perfectly fine in a PowerShell console; that isn't the issue. The problem is running the code as a classifier for File Server Resource Manager.
I've worked around it for now by creating a CSV file with the result of Get-ADUser and loading that inside the script (so I don't require any non-standard modules). But it would be nicer to run this without a dependency on some external task.
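For reference, a rough sketch of that CSV workaround (the file path and the selected properties are assumptions):
# External job (e.g. a scheduled task) that exports the AD data the classifier needs
Import-Module ActiveDirectory
Get-ADUser -Filter * -Properties Department |
    Select-Object SamAccountName, SID, Department |
    Export-Csv -Path 'C:\FSRM\adusers.csv' -NoTypeInformation

# Inside the classification script: no module dependency, just read the CSV
$users = Import-Csv -Path 'C:\FSRM\adusers.csv'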
The classification script is executed by the File Server Resource Manager service (not by the UI you are looking at), which runs under the SYSTEM account.
So you either need to change the account the service runs under, or grant that account rights to access the objects you need to access; in my case, Active Directory.
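To see which account the service currently runs under, something like this should work (the service name 'SrmSvc' is my assumption for the FSRM service, so verify it first):
# Show the logon account of the FSRM service (service name assumed to be 'SrmSvc')
Get-CimInstance -ClassName Win32_Service -Filter "Name='SrmSvc'" |
    Select-Object Name, DisplayName, StartName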