Trouble setting up MSMQ ACL using PowerShell cmdlet

My MSMQ queues are created by the PowerShell DSC engine, and I can see them being created. Since the DSC engine runs under the SYSTEM account, the queue owner also gets set to SYSTEM.
When I try to set the MSMQ ACL from a PowerShell console, I consistently get the following error:
PS C:\Users\Administrator.DOMAIN> whoami; Get-MsmqQueue queue1 | Set-MsmqQueueACL -UserName "Everyone" -Allow FullControl
DOMAIN\administrator
Set-MsmqQueueACL : Failed to set security descriptor. Error code: 3222143013
At line:1 char:50
+ whoami; Get-MsmqQueue queue1 | Set-MsmqQueueACL -UserName "Eve ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidResult: (FullControl:MessageQueueAccessRights) [Set-MsmqQueueACL], Win32Exception
+ FullyQualifiedErrorId : Failed to set security descriptor. Error code: 3222143013,Microsoft.Msmq.PowerShell.Commands.SetMSMQQueueACLCommand
I also can't set the MSMQ ACL using a custom DSC resource, which does essentially the same thing, only from the SYSTEM account.
So the question is: is there any way to set MSMQ permissions from within the PowerShell DSC engine using the Set-MsmqQueueACL cmdlet? Or, if I can at least solve the error above, maybe I'll be able to solve the DSC problem as well.
I'm running Windows Server 2012 and WMF 4.0.
Thanks in advance.

I did something similar recently and hit the same problem. You have to take ownership of the queue first (admin rights required), and then you can change the permissions.
Try these manual steps in the Computer Management snap-in first to check that they solve your error, and then work out how to reproduce them via PowerShell.
Start -> Run -> compmgmt.msc
Expand "Computer management (Local) -> Services and Applications -> Message Queuing -> Private Queues"
Right click -> Properties -> Security -> Advanced -> Owner -> Other users or groups...
Enter your user name (DOMAIN\administrator)
Click OK, then OK again
You should now be able to edit security via script
I ended up writing some PInvoke code to take ownership of the queue using C#, which I compiled on the fly with Add-Type in PowerShell. I can't share it unfortunately as it's proprietary, but this question might give you some pointers:
How do I set the owner of a message queue?
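For a rough idea of the shape of that approach, here's an untested sketch. MQPathNameToFormatName and MQSetQueueSecurity are the relevant mqrt.dll exports; the class name, queue path, and account are illustrative, and depending on who owns the queue you may also need to enable SeTakeOwnershipPrivilege for the calling process:
Add-Type -TypeDefinition @'
using System;
using System.Runtime.InteropServices;
using System.Security.AccessControl;
using System.Security.Principal;
using System.Text;

public static class MsmqOwner
{
    const int OWNER_SECURITY_INFORMATION = 0x1;

    [DllImport("mqrt.dll", CharSet = CharSet.Unicode)]
    static extern int MQPathNameToFormatName(string pathName, StringBuilder formatName, ref int count);

    [DllImport("mqrt.dll", CharSet = CharSet.Unicode)]
    static extern int MQSetQueueSecurity(string formatName, int securityInformation, byte[] securityDescriptor);

    public static void TakeOwnership(string queuePath, string account)
    {
        // Resolve the queue path name to the format name the security APIs expect.
        int len = 256;
        StringBuilder formatName = new StringBuilder(len);
        int hr = MQPathNameToFormatName(queuePath, formatName, ref len);
        if (hr != 0) throw new InvalidOperationException("MQPathNameToFormatName failed: 0x" + hr.ToString("X8"));

        // Build a security descriptor whose only populated field is the new owner.
        SecurityIdentifier sid = (SecurityIdentifier)new NTAccount(account).Translate(typeof(SecurityIdentifier));
        CommonSecurityDescriptor sd = new CommonSecurityDescriptor(false, false, ControlFlags.None, sid, null, null, null);
        byte[] sdBytes = new byte[sd.BinaryLength];
        sd.GetBinaryForm(sdBytes, 0);

        // Write only the owner portion of the queue's security descriptor.
        hr = MQSetQueueSecurity(formatName.ToString(), OWNER_SECURITY_INFORMATION, sdBytes);
        if (hr != 0) throw new InvalidOperationException("MQSetQueueSecurity failed: 0x" + hr.ToString("X8"));
    }
}
'@
[MsmqOwner]::TakeOwnership('.\private$\queue1', 'DOMAIN\administrator')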
P.S. error code 3222143013 is 0xC00E0025, which translates to MQ_ERROR_ACCESS_DENIED (see http://msdn.microsoft.com/en-us/library/ms700106%28v=vs.85%29.aspx)
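A quick way to do that decimal-to-hex conversion from PowerShell itself:
'0x{0:X8}' -f 3222143013   # prints 0xC00E0025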

I've managed to overcome this issue by using the following code in my custom DSC resource:
$ScriptBlock = {
    param(
        [String] $QueueName,
        [String] $Username,
        [String[]] $MessageQueueAccessRight,
        [ValidateSet("Allow","Deny")]
        [String] $MessageQueueAccessType
    )
    $params = @{}
    $queue = Get-MsmqQueue -Name $QueueName
    $params.Add("InputObject", $queue)
    $params.Add("UserName", $Username)
    switch ($MessageQueueAccessType)
    {
        "Allow" { $params.Add("Allow", "$MessageQueueAccessRight"); Break }
        "Deny"  { $params.Add("Deny",  "$MessageQueueAccessRight"); Break }
    }
    Set-MsmqQueueACL @params
}
Foreach ($MessageQueueAccessRight in $MessageQueueAccessRights)
{
    Invoke-Command -ScriptBlock $ScriptBlock -ComputerName . -Credential $DomainAdministratorCredential -ArgumentList $QueueName,$Username,$MessageQueueAccessRight,$MessageQueueAccessType
}
Of course, it's necessary to use the same approach when the MSMQ queue is created by DSC, so that the queue is created by the same account that is later going to adjust the ACLs.
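For completeness, here is a sketch of queue creation wrapped in the same Invoke-Command pattern, so the owner ends up being the account that later adjusts the ACLs (same variables as above assumed):
Invoke-Command -ComputerName . -Credential $DomainAdministratorCredential -ArgumentList $QueueName -ScriptBlock {
    param([String] $QueueName)
    # The invoking account, not SYSTEM, becomes the queue owner.
    if (-not (Get-MsmqQueue -Name $QueueName -ErrorAction SilentlyContinue)) {
        New-MsmqQueue -Name $QueueName | Out-Null
    }
}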

To do this in DSC, you can run your command using different credentials by having your custom DSC resource take a [PSCredential] parameter.
To do this securely requires some significant changes to your DSC infrastructure. See my answer to this question: https://serverfault.com/questions/632390/protecting-credentials-in-desired-state-configuration-using-certificates/#632836
If you just want to test before making those changes, you can tell DSC to allow storing your credentials in plaintext by setting PSDscAllowPlainTextPassword = $true in your configuration data (see here for details).
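For a quick test, the configuration data might look something like this (a sketch; the configuration name and output path are placeholders):
$ConfigData = @{
    AllNodes = @(
        @{
            NodeName                    = 'localhost'
            # Test only: the credential ends up unencrypted in the generated MOF
            PSDscAllowPlainTextPassword = $true
        }
    )
}
MyMsmqConfiguration -ConfigurationData $ConfigData -OutputPath 'C:\DSC'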

I also created a custom DSC resource to set up and modify my MSMQ queues within my web farm. Since DSC runs as SYSTEM, you must ensure that the SYSTEM account has access to create and modify MSMQ queues on the node.
There is a way to have DSC run as a different account. If that is the case, then you have to ensure that you are passing in that account when attempting to create or modify your MSMQ queue.
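On WMF 5.0+ (not the WMF 4.0 the original question mentions), the simplest form of that is the PsDscRunAsCredential property; a sketch, with the resource body and credential variable purely illustrative:
Script CreateQueue
{
    GetScript  = { @{ Result = [bool](Get-MsmqQueue -Name 'queue1' -ErrorAction SilentlyContinue) } }
    TestScript = { [bool](Get-MsmqQueue -Name 'queue1' -ErrorAction SilentlyContinue) }
    SetScript  = { New-MsmqQueue -Name 'queue1' | Out-Null }
    # Runs the script blocks above as this account instead of SYSTEM
    PsDscRunAsCredential = $QueueAdminCredential
}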
I understand I am responding to an old thread. But someone else in the near future may be facing the same issue and come across this thread.
Enjoy & Good Luck!

Related

Passing active powershell Session to background jobs

I am writing a powershell script to manipulate Exchange Online mailboxes.
I want this script to run with background jobs in parallel, so I'm trying to use PoshRSJob (https://github.com/proxb/PoshRSJob) to create the jobs.
My code is:
Connect-ExchangeOnline -Credential ...
Start-RSJob -ModulesToImport ExchangeOnlineManagement -Throttle $ProcesosConcurrentes -InputObject $jobs -ScriptBlock {
./migra_buzon.ps1 ...
}
Where:
$jobs is an ArrayList holding the parameters of the mailboxes I want to operate on
migra_buzon.ps1 is another PowerShell script that operates on one specified mailbox
The problem when I run it this way is that the jobs fail with the error:
The term 'Add-MailboxPermission' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Although other commands like Get-EXOMailbox are working correctly.
Looking for help I found that the problem can be related with the session, so I changed my code to:
Connect-ExchangeOnline -Credential ...
Start-RSJob -ModulesToImport ExchangeOnlineManagement -Throttle $ProcesosConcurrentes -InputObject $jobs -ScriptBlock {
$o365session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://outlook.office365.com/powershell-liveid" -Credential $(Import-Clixml $Using:ExchangeCredentials) -Authentication "Basic" -AllowRedirection
Import-PSSession $o365Session -CommandName @('Add-MailboxPermission', 'Get-MailboxPermission')
./migra_buzon.ps1 ...
}
In this case, the problem I have is with the Exchange connection. After running a few jobs I'm getting the error:
[outlook.office365.com] Processing data from remote server outlook.office365.com failed with the following error message: Client did not get proper response from server. For more information, see the about_Remote_Troubleshooting Help topic.
Cannot validate argument on parameter 'Session'. The argument is null. Provide a valid value for the argument, and then try running the command again.
So my question is, what is the right way to run background jobs sharing the connection got in the main process?
Thanks
PS: I first tried to run the jobs with Start-Job, but then each background job needs its own connection, so I got a "maximum number of connections exceeded" error. That's the reason I changed my code to Start-RSJob.
It appears that you are hitting Exchange Online throttling limits.
If that is indeed the case, you can try the following method.
How to relax PowerShell throttling
There is a relatively new customer facing way to increase or update PowerShell Throttling Policies.
Go to Microsoft 365 admin center.
Validate that you are logged in with the user that has the correct role assignment.
Click on the Need Help? Widget in the bottom right corner
Type Exchange PowerShell throttling in the search box and select “Temporarily update throttling policies for a migration”. Keep in mind that this is only applicable for 90 days; after 90 days, the throttles will return to the default values for that tenant.
MachSol offers tenant management using a job engine, which allows you to do multiple operations from the front-end and lets the jobs handler take care of processing in the background. You can give it a try:
https://www.machsol.com/machpanel-automation-for-microsoft-CSP-partners/

Powershell JEA Security hole with commands embedded in functions?

This seems bonkers so I'm hoping I didn't find a big security gap... I have Powershell JEA (just enough administration) successfully set up on a server to allow only certain administrative functions. Specifically, I don't have the "net" command allowed at all. If I do the below:
Invoke-Command -computername MYSERVER -configurationname MYCONFIG -scriptblock {
net stop "My windows service"
}
Then I get the error below as expected:
The term 'net.exe' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again.
+ CategoryInfo : ObjectNotFound: (net.exe:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
BUT, if I wrap my "net.exe" usage inside a function, it actually works:
Invoke-Command -computername MYSERVER -configurationname MYCONFIG -scriptblock {
function StopService($servicename) {
net stop "$($servicename)"
}
StopService "My windows service"
}
The above does not throw an error and actually stops the service. WTF?
This applies to more than just the "net" command. Another example: "Out-File" is not allowed. The code below fails:
Invoke-Command -computername MYSERVER -configurationname MYCONFIG -scriptblock {
"hacked you" | Out-File C:\test.txt
}
With the error:
The term 'Out-File' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again.
+ CategoryInfo : ObjectNotFound: (Out-File:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
+ PSComputerName : vxcazdev01
But if I do it this way, it works:
Invoke-Command -computername MYSERVER -configurationname MYCONFIG -scriptblock {
function DoIt() {
"hacked you" | Out-File C:\test.txt
}
DoIt
}
Why is this happening? Am I missing something? The JEA project on github is now read-only so I can't open an issue there.
Edit to add: the same problem happens if I use Enter-PSSession instead of Invoke-Command.
Edit to add relevant session config pieces: My session config file only has a few customizations from the default file produced by the New-PSSessionConfigurationFile command:
SessionType = 'Default'
RunAsVirtualAccount = $true
RoleDefinitions = @{
'MYDOMAIN\MYADGROUP' = @{ RoleCapabilities = 'MyCustomRole' }
}
MYADGROUP is the only group my test user is a member of. And then this is registered on the server like so:
Register-PSSessionConfiguration -Path "C:\Program Files\WindowsPowerShell\Modules\MyJEAModule\VSM.pssc" -Name 'MYCONFIG' -Force
I'm just going to answer this question myself with the answer: security hole by design.
The documentation says this:
The body (script block) of custom functions runs in the default language mode for the system and isn't subject to JEA's language constraints.
It's rather surprising, to me at least, that JEA will let you lock down actions on a server in a security sandbox, yet as soon as someone writes their own custom function they have full administrative rights to the machine and have broken out of the box. Allowing or restricting the creation of custom functions via the language mode is one thing, but bypassing the configured security permissions is another. In my opinion, user-written custom functions should be subject to the full security limitations; custom functions written in the role capability files should have full admin rights, as the documentation indicates.
The other answer by HAL9256 is great, but it describes the benefits of JEA, which is not the topic of this post.
I wouldn't say that it's a security hole; rather, you are clearly demonstrating what can happen on a system when you have not set up a fully secured configuration. Microsoft even states JEA doesn't protect against admins, because "they could simply RDP in and change the configuration". We need the correct combination of SessionType and RoleDefinitions, and to understand that they are meant for two different configurations.
Your example demonstrates a configuration where, even though we lock the front door of the house, we started off with a house that had all the windows and doors open. It is fully possible to get in through the back door, or reach through a window and unlock the front door, demonstrating the fruitlessness of locking the front door. For example, I don't need to run net stop; I could just do a taskkill instead, or..., or..., etc.
Let's look at the overview of what JEA is designed for:
Reduce the number of administrators on your machines using virtual accounts or group-managed service accounts to perform privileged actions on behalf of regular users.
Limit what users can do by specifying which cmdlets, functions, and external commands they can run.
Better understand what your users are doing with transcripts and logs that show you exactly which commands a user executed during their session.
Reduce the number of administrators
We can use JEA to remove people from the local administrators group, or larger Domain Admin groups. They can then selectively get elevated Administrator rights when needed through Virtual Accounts.
If we set it up with SessionType = 'Default', all language features are enabled. We can essentially have a Jr. Technical Analyst, without Domain Admin rights and without Local Admin rights, log on and perform administrative duties. This is what that session type is meant for.
Limit what users can do
If we set it up with SessionType = 'Default', all language features are enabled. In this mode it doesn't matter what commands we limit; all the doors and windows are open, and we can pretty much do whatever we want. One rule in Windows is that there are always three or four different ways to do something. You just can't plug all the holes when everything is wide open.
@MathiasR.Jessen is right; the only way to limit what users can do is to first lock down the system. Setting SessionType = 'RestrictedRemoteServer' locks down the session to:
Sessions of this type operate in NoLanguage mode and only have access
to the following default commands (and aliases):
Clear-Host (cls, clear)
Exit-PSSession (exsn, exit)
Get-Command (gcm)
Get-FormatData
Get-Help
Measure-Object (measure)
Out-Default
Select-Object (select)
No PowerShell providers are available, nor are any external programs
(executables or scripts).
This starts us out with a completely locked-up house. We then selectively enable the needed commands. Ideally we should pre-create custom functions, so that the custom function is the only thing users can run; they are technically not even allowed to execute the commands inside the function at all, since it's all handled by the Virtual Account.
What you did was essentially exploit this custom-function capability by "cheating" and creating your own "custom function" that runs in the Virtual Account scope, not your own, which is why it was able to run "non-allowed" commands while you were not. If SessionType = 'RestrictedRemoteServer', you wouldn't be able to create scripts or custom functions as demonstrated, and hence the "hole" would not be there.
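For illustration, a locked-down pair of configuration files along those lines might look like this (a sketch; the file names, GUIDs, and the function are placeholders):
# MyConfig.pssc - session configuration in NoLanguage mode
@{
    SchemaVersion       = '2.0.0.0'
    GUID                = '00000000-0000-0000-0000-000000000001'
    SessionType         = 'RestrictedRemoteServer'
    RunAsVirtualAccount = $true
    RoleDefinitions     = @{
        'MYDOMAIN\MYADGROUP' = @{ RoleCapabilities = 'MyCustomRole' }
    }
}
# MyCustomRole.psrc - expose only a pre-written function, not the commands inside it
@{
    GUID                = '00000000-0000-0000-0000-000000000002'
    FunctionDefinitions = @(
        @{
            Name        = 'Stop-MyWindowsService'
            ScriptBlock = { param($Name) Stop-Service -Name $Name }
        }
    )
}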
Better understand what your users are doing
Finally, the other benefit of JEA is that it can record a transcript of all the commands that are run. This might be needed for audit reasons, fed into a SIEM solution, or used to find out how your Jr. Technical Analyst messed up your system ;-).

Drive Mapping with Azure Scale Sets using Desired State Configuration

I am running into an interesting issue. Maybe you fine folks can help me understand what's happening here. If there's a better method, I'm all ears.
I am running a DSC configuration on Azure and would like to map a drive. I've read this really isn't what DSC is for, but I am not aware of any other way of doing this with Azure Scale Sets outside of DSC. Here's the portion of the script I am running into issues with:
Script MappedDrive
{
SetScript =
{
$pass = "passwordhere" | ConvertTo-SecureString -AsPlainText -force
$user = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "username",$pass
New-PSDrive -Name W -PSProvider FileSystem -root \\azurestorage.file.core.windows.net\storage -Credential $user -Persist
}
TestScript =
{
Test-Path -path "W:"
}
GetScript =
{
$hashresults = @{}
$hashresults['Exists'] = Test-Path W:
}
}
I've also attempted this code in the SetScript section:
(New-Object -ComObject WScript.Network).MapNetworkDrive('W:','\\azurestorage.file.core.windows.net\storage',$true,'username','passwordhere')
I've also tried a simple net use command to map the drive instead of the fancier New-Object or New-PSDrive cmdlets. Same behavior.
If I run these commands (New-Object/net use/New-PSDrive) manually, the machine will map the drive if I run it with a separate drive letter. Somehow the drive is attempting to be mapped but isn't mapping.
Troubleshooting I've done:
There is no domain in my environment. I am simply attempting to create a scale set and run DSC to configure the machine using the storage account credentials granted upon creation of the storage account.
I am using the username and password that is given to me by the Storage Account user id and access key (randomly generated key, with usually the name of the storage account as the user).
Azure throws no errors on running the DSC module (No errors in Event Log, Information Only - Resource execution sequence properly lists all of my sequences in the DSC file.)
When I log into the machine and check to see if the drive is mapped, I run into a disconnected network drive on the drive letter I want (W:).
If I open Powershell, I receive an error: "Attempting to perform the InitializeDefaultDrives operation on the 'FileSystem' provider failed."
If I run "Get-PSDrive" the W: drive does not appear.
If I run the SetScript code manually inside a Powershell Console, the mapped drive works fine under a different drive letter.
If I try to disconnect the W: drive, I receive "This network connection does not exist."
I thought maybe DSC needed some time before mapping and added a Sleep Timer, but that didn't work. Same behavior.
I had a similar problem before. While it didn't involve DSC, mounting an Azure File share would be fine until the server was restarted; then it would appear as a disconnected drive. This happened if I used New-Object/net use/New-PSDrive with the persist option.
I found the answer to that issue in the updated docs:
Persist your storage account credentials for the virtual machine
Before mounting to the file share, first persist your storage account
credentials on the virtual machine. This step allows Windows to
automatically reconnect to the file share when the virtual machine
reboots. To persist your account credentials, run the cmdkey command
from the PowerShell window on the virtual machine. Replace
<storage-account-name> with the name of your storage account, and
<storage-account-key> with your storage account key.
cmdkey /add:<storage-account-name>.file.core.windows.net /user:<storage-account-name> /pass:<storage-account-key>
Windows will now reconnect to your file share when the virtual machine
reboots. You can verify that the share has been reconnected by running
the net use command from a PowerShell window.
Note that credentials are persisted only in the context in which
cmdkey runs. If you are developing an application that runs as a
service, you will need to persist your credentials in that context as
well.
Mount the file share using the persisted credentials
Once you have a remote connection to the virtual machine, you can run
the net use command to mount the file share, using the following
syntax. Replace <storage-account-name> with the name of your storage
account, and <share-name> with the name of your File storage share.
net use <drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name>
example :
net use z: \\samples.file.core.windows.net\logs
Since you persisted your storage account credentials in the previous
step, you do not need to provide them with the net use command. If you
have not already persisted your credentials, then include them as a
parameter passed to the net use command, as shown in the following
example:
net use <drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> /u:<storage-account-name> <storage-account-key>
Edit:
I don't have an Azure VM free to test it on, but this works fine on a Server 2016 hyper-v vm
Script MapAzureShare
{
GetScript =
{
}
TestScript =
{
Test-Path W:
}
SetScript =
{
Invoke-Expression -Command "cmdkey /add:somestorage.file.core.windows.net /user:somestorage /pass:somekey"
Invoke-Expression -Command "net use W: \\somestorage.file.core.windows.net\someshare"
}
PsDscRunAsCredential = $credential
}
In my brief testing the drive would only appear after the server was rebooted.
What I imagine is happening here:
DSC runs under the NT AUTHORITY\SYSTEM account, and unless the Credential attribute has been set, the computer account is used when pulling files from a network share. Looking at how Azure Files operates, permissions shouldn't be an issue, but running this whole process under NT AUTHORITY\SYSTEM could be. I suggest you try to run DSC as a user of your VM and see if that works.
P.S. You could also try to perform the same operation against a VM with a network share where you are confident that the share/NTFS permissions are correct. You might need to enable anonymous access to your share for that to work.

Get-WinEvent via Powershell remoting

I have a non-admin access to a server. I'm allowed to connect via RDP, and to use PowerShell remoting. When I invoke the following PowerShell command from an RDP session:
Get-WinEvent -MaxEvents 100 -Provider Microsoft-Windows-TaskScheduler
I get 100 records, as expected.
When I do the same via PowerShell remoting, by invoking the following from my local machine:
invoke-command -ComputerName myserver {Get-WinEvent -MaxEvents 100 -Provider Microsoft-Windows-TaskScheduler }
I get an error:
No events were found that match the specified selection criteria.
CategoryInfo : ObjectNotFound: (:) [Get-WinEvent], Exception
FullyQualifiedErrorId : NoMatchingEventsFound,Microsoft.PowerShell.Commands.GetWinEventCommand
Any idea why? The remote PowerShell session should be running under identical credentials, right?
EDIT: whoami does show a difference in the security context between RDP logon and PowerShell remoting - the group set is different. In the RDP logon session, there are the following groups in the token:
BUILTIN\Remote Desktop Users
NT AUTHORITY\REMOTE INTERACTIVE LOGON
while in the remoted one, there's
CONSOLE LOGON
That could account for the discrepancy in rights...
EDIT: from the registry, it looks like the Task Scheduler log is somehow part of the System log. According to MS KB article Q323076, the security descriptor for the System log can be found under HKLM\SYSTEM\CurrentControlSet\Services\EventLog\System, value CustomSD. I can't check the server in question, but on another server where I'm an admin, there's no CustomSD under that key. Under HKLM\SYSTEM\CurrentControlSet\Services\EventLog\System\Microsoft-Windows-TaskScheduler, there's none either. Only the Security log gets a CustomSD. The next question is: where's the default SD?
Permissions on the actual log file at C:\Windows\System32\winevt\Logs\Microsoft-Windows-TaskScheduler%4Operational.evtx are irrelevant; the access is mediated by the EventLog service anyway.
If you are not an administrator on the remote computer, and invoke-command -ComputerName myserver {whoami /all} tells you that you are who you expected to be, then:
You will need to be a member of the Event Log Readers group on the remote computer,
as well as the Remote Management Users group, which I believe you already are.
If you need to read Security logs, you will also need the Manage auditing and security log right under Local Security Policy -> Security Settings -> Local Policies -> User Rights Assignment.
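For example, adding the account to that group on the remote machine (run elevated there; the account name is a placeholder):
net localgroup "Event Log Readers" DOMAIN\someuser /add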
According to Default ACLs on Windows Event Logs @ MSDN blog, in Windows Server 2003+, the default ACL for the System log goes:
O:BAG:SYD:
*(D;;0xf0007;;;AN) // (Deny) Anonymous:All Access
*(D;;0xf0007;;;BG) // (Deny) Guests:All Access
(A;;0xf0007;;;SY) // LocalSystem:Full
(A;;0x7;;;BA) // Administrators:Read,Write,Clear
(A;;0x5;;;SO) // Server Operators:Read,Clear
(A;;0x1;;;IU) // INTERACTIVE LOGON:Read <===================
(A;;0x1;;;SU) // SERVICES LOGON:Read
(A;;0x1;;;S-1-5-3) // BATCH LOGON:Read
(A;;0x2;;;LS) // LocalService:Write
(A;;0x2;;;NS) // NetworkService:Write
Does NT AUTHORITY\INTERACTIVE LOGON include RDP logon? I've found a forum message that says so, but I'd better find a doc to that effect...
The article claims this ACE comes "straight from the source code". So it's hard-coded in the service, with a chance to change via the registry.
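As an aside, PowerShell 5+ can decode such SDDL strings into friendly ACE descriptions, which makes them easier to verify; here against a shortened version of the descriptor quoted above:
$sddl = 'O:BAG:SYD:(A;;0x7;;;BA)(A;;0x1;;;IU)(A;;0x2;;;LS)'
(ConvertFrom-SddlString -Sddl $sddl).DiscretionaryAcl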
By default, you need local admin rights to open a PowerShell remoting session.
But there is a workaround/alternative here:
https://4sysops.com/archives/powershell-remoting-without-administrator-rights/
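If I recall that article correctly, the core of the workaround is granting the non-admin user Invoke (Execute) rights on the endpoint's security descriptor, e.g.:
# Opens a permissions dialog for the default endpoint; grant the user Invoke there
Set-PSSessionConfiguration -Name Microsoft.PowerShell -ShowSecurityDescriptorUI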
I had the weirdest variation of this problem; it was driving me nuts!
Remoting from a W2008R2 server (logged on as domain admin, inside an interactive PowerShell session) to a Win7 workstation to get logon/logoff events:
invoke-command -computername $pc {Get-WinEvent -FilterHashtable @{logname='Security';Id=@(4624,4634)}}
-> No events were found that match the specified selection criteria.
But it does work when outputting an empty string in the scriptblock before the Get-WinEvent:
invoke-command -computername $pc {"";Get-WinEvent -FilterHashtable #{lognam
e='Security';Id=#(4624,4634)}}
TimeCreated ProviderName Id Message PSComputerName
----------- ------------ -- ------- --------------
19/03/2018 11:51:41 Microsoft-Windows-Se... 4624 An account was succe... b25_x64
19/03/2018 11:51:41 Microsoft-Windows-Se... 4624 An account was succe... b25_x64
Stumbled upon this fix after trying everything: Enter-PSSession, New-PSSession, using the -Credential parameter to pass a predefined credential to Invoke-Command, to Get-WinEvent, to both. Nothing worked; every combination gave "No events...".
Then I inserted a $cred inside the scriptblock to show the passed-on credential for debugging, and suddenly I got the events I was looking for...

New-MailboxExportRequest doesn't work in remote PSSession

I often use the New-MailboxExportRequest command on an Exchange server in a PowerShell console, like this one:
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010;
New-MailboxExportRequest -Mailbox jadrego -FilePath \\computer1\c$\test.pst -Verbose
It works correctly. But if I run those commands in a remote PS session like the one below (using the same user, a Domain Admin and Exchange Admin):
Invoke-Command -ComputerName vdiv03 -ScriptBlock {
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010;
New-MailboxExportRequest -Mailbox jadrego -FilePath \\computer1\c$\test.pst
}
I obtain this error (with -Verbose):
failed to communicate with mailbox database
Loading the snapin like that isn't supported in Exchange 2010.
IMHO, you'd be much better off just leveraging the native remoting built into Exchange for management tasks.
$ExchangeServer = <exchange server name>
$SessionParams =
@{
ConfigurationName = 'Microsoft.Exchange'
ConnectionURI = "http://$ExchangeServer/powershell/"
Authentication = 'Kerberos'
# Credential = $Creds
}
$Session = New-PSSession @SessionParams
Invoke-Command -ScriptBlock {New-MailboxExportRequest -Mailbox jadrego -FilePath \\computer1\c$\test.pst} -Session $Session
Remove-PSSession $Session
Set $ExchangeServer to the name of one of your Exchange 2010 servers. The account will need to be a member of the necessary RBAC role for the function you're performing, and you can uncomment the Credential parameter and provide alternate credentials for the session if you need to.
This will also eliminate the need to have the management tools installed on the computer that's running the script, and the associated headaches of keeping them patched to the same level as what's on the servers.
If you're working interactively, or running a script that uses many Exchange cmdlets, you can add the session creation to your profile and do an Import-PSSession; you'll then have proxy functions for the Exchange cmdlets available locally that you can use the same as if you'd loaded the snapin.
Import-PSSession $Session
Some caveats to be aware of:
When you use implicit remoting like this, the account of the credentials used to establish the session determines what capabilities you will have. What appear to be Exchange cmdlets added to the local session are actually proxy functions (you can verify this using Get-Command). This set of proxy functions is created dynamically by Exchange when you initially establish the session and is customized according to the RBAC roles the connecting account belongs to. If the account doesn't have permission to perform given functions, you will not get the proxy functions for those cmdlets, or the functions may be missing parameters.
The results you get back will not be the same as the native objects returned if you used an EMS shell or loaded the snapin. They will be deserialized objects, which means they may be missing methods and will lose some fidelity compared to the native objects. There will be very few instances where this will be an issue or cannot be worked around.
Also be aware that when you use implicit remoting, updates are made under the authority of an Exchange system account, not your credentials. When you use the snapin, your account must have permission to update the Exchange properties stored in AD directly, and those changes will be logged in the Windows audit logs (if enabled) as having been made by that account. When you use implicit remoting, they will be recorded as being done by the Exchange service account. Exchange will record the details of the actual user account that made the request in its admin audit log, and you can use Search-AdminAuditLog to find out when changes were made, and by whom, even if Windows audit logging is not enabled. If you use the snapin directly and do not have AD audit logging enabled, you will lose that audit trail.
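For instance, a query along these lines would show who created mailbox export requests recently (parameter values are illustrative):
Search-AdminAuditLog -Cmdlets New-MailboxExportRequest -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date)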