Inconsistent query on remote mapped drives - powershell

I used the PowerShell script below to check the mapped drives on remote PCs.
Some PCs give the desired result but others do not.
Get-WmiObject Win32_MappedLogicalDisk -computer HW059 | select name, providername
The account I am using has the same administrator rights on all the PCs, so I don't think the issue is due to user privileges.
I am wondering whether there are any services that need to be started / are relevant to the script?
I checked that the WMI service is running on all the PCs.
Sorry, I am new to scripting.
Would someone please help?
[Screenshot: PowerShell result]

Mapped drives are a feature of a user session; they do not exist by default on a system. Even if every user had the same drive mapped (e.g. S: mapped to \\server\share), S: would not exist on the machine if no user is logged in.
Please see the remarks from MSDN (you are user A in the description below):
The instances returned for this class are as follows, supposing that user A is enumerating the instances:
The provider looks for a logon session of user A on that machine: If there is one (and only one) such logon session, then the provider returns the mapped drives of that session. If there is more than one session for user A on the machine, then no mapped drive instances are returned (because the provider has no reasonable way of deciding which session to use).
If there are no sessions of user A running, and there is a locally logged on user B:
If there is a single session for user B, then the provider impersonates A and returns the mapped drives of user B. This case supports the scenario of Helpdesk wanting to see the instances of a locally logged on user. However, whether instances are returned depends on the Local Security Policy settings in the Control Panel Administrative Tools. If the following policy is set to "Object Creator", then no mapped drive instances are returned, even if A is a member of the Administrators group: "System objects: default owner for objects created by members of the Administrators group."
Again, if there is more than one session of user B running on the machine, then the provider has no way of deciding which to use. In this case, no mapped drive instances are returned.
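If you need the mappings regardless of which session the query lands in, one workaround is to read each logged-on user's persistent drive mappings straight from the loaded user hives instead of relying on Win32_MappedLogicalDisk. This is only a minimal sketch, assuming PowerShell Remoting is enabled on the target and that the drives were mapped as persistent (so they appear under HKEY_USERS\<SID>\Network); the computer name is the example from the question:
# Read persistent drive mappings for every loaded user hive on the remote PC
Invoke-Command -ComputerName 'HW059' -ScriptBlock {
    Get-ChildItem 'Registry::HKEY_USERS' |
        Where-Object { $_.PSChildName -match '^S-1-5-21-(\d+-){3}\d+$' } |
        ForEach-Object {
            $sid = $_.PSChildName
            Get-ChildItem "Registry::HKEY_USERS\$sid\Network" -ErrorAction SilentlyContinue |
                ForEach-Object {
                    [pscustomobject]@{
                        UserSID     = $sid
                        DriveLetter = $_.PSChildName
                        RemotePath  = (Get-ItemProperty $_.PSPath).RemotePath
                    }
                }
        }
}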

Get-NetFirewallRule User{GUID}

We are having a problem with our Windows 10 computers not adding all of the firewall rules from GPOs, either when the computer restarts or somewhere along the line.
We have multiple users that log onto the consoles (usually with a roaming profile) and a small percentage of them throw a firewall exception when trying to open necessary apps that should have been allowed through GPOs.
My questions are:
1.) Why is this happening?
2.) How do I get information about the "USER" GUID that is returned from Get-NetFirewallRule?
Get-NetFirewallRule -Action Block
One partial result is:
TCP Query User{E2507D53-3CCE-4791-8BBF-9830003E90C5}
So how do I get information about this GUID (E2507D53-3CCE-4791-8BBF-9830003E90C5)?
3.) Also, some of the computers that have this issue also block PSRemoting, so I cannot fix this issue remotely, which is just as bad as the other issue!
Any ideas?
Thank you
PS: I have searched high and low for info about that GUID. It has become a personal goal at this point to resolve the GUID to an object name.
What is happening is that when Windows prompts you to create a firewall exception for an application (even if you hit Cancel), two rules are created by Windows:
TCP Query User and UDP Query User
The rules are stored in the registry under the path HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\FirewallRules
You might get this prompt for many applications, which would mean the same name would be created over and over again. The GUID is just there so that each application attempt gets a unique rule name.
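If you want to see which program each of those auto-created "Query User" block rules belongs to, something like the following sketch can help; it pairs each rule with its application filter and then dumps the matching raw registry values. This is untested, so treat it as a starting point:
# List the auto-created "Query User" block rules together with the program they target
Get-NetFirewallRule -Action Block |
    Where-Object Name -like '*Query User*' |
    ForEach-Object {
        $app = $_ | Get-NetFirewallApplicationFilter
        [pscustomobject]@{
            Name    = $_.Name
            Program = $app.Program
            Enabled = $_.Enabled
        }
    }

# Inspect the raw rule definitions in the registry; each value is a
# pipe-delimited string describing one rule (action, direction, program path, and so on)
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\FirewallRules'
(Get-ItemProperty -Path $key).PSObject.Properties |
    Where-Object { $_.Value -like '*Query User*' } |
    Select-Object Name, Value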

Shadowing RDP session automatically

I want to create a PowerShell script. One of our supporting companies needs to connect to our server from time to time to do their work, and I need to connect to their RDP session to watch whether what they are doing is OK.
Server: MS Server 2012 R2 x64
The case is that I want to create a script which:
1- Checks the server's current sessions
2- Finds the specific session ID for the username (the one I will give them to connect with) that is currently logged in via RDP
3- When I pull the correct session ID for that username (which means he/she is currently online and connected), I want to shadow his/her session without prompting for/requesting their approval.
Yes, I can do these steps separately, but I am looking for a PowerShell script or something like that to do them in one attempt: shadow the RDP session with the correct session ID, and if he/she is not online, have the system return a message to me that the username is not currently online.
Is it possible?
Thanks&Regards
Melih
If memory serves, PowerShell does not natively support querying RDP sessions (unless you go through the RDS broker, but I could be wrong here), but you can easily do that via query session.
I cannot test it right now, but something like this should do the trick:
C:\>query session
 SESSIONNAME        USERNAME           ID  STATE    TYPE       DEVICE
 console            Administrator1      0  active   wdcon
 rdp-tcp#1          User1               1  active   wdtshare
 rdp-tcp                                2  listen   wdtshare
You can even call it with the username directly if that is known to you.
Shadowing the session would be easily accomplished with something like
mstsc /v:"$srv" /shadow:"$id" /control /noconsentprompt
I did not test this, so maybe it needs some tweaking, but a possible starting point could be:
# Query the user's session on the remote server, skip the header row, and split
# the remaining row on whitespace (columns: SESSIONNAME  USERNAME  ID  STATE ...)
$userSessions = query session user01 /SERVER:server01 | Select-Object -Skip 1 | ForEach-Object{ $_.Split(' ', [System.StringSplitOptions]::RemoveEmptyEntries) }
# The third token is the session ID (assumes a single, active session for the user)
$sessionId = $userSessions[2]
mstsc /v:"$srv" /shadow:"$sessionId" /control /noconsentprompt
Of course if running from the local server you can omit the /SERVER:XXX argument.
Hope this can help getting you started.
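Putting those pieces together, here is a minimal, untested sketch that shadows the session when the user is online and otherwise reports that they are not; the function and parameter names are purely illustrative:
function Start-RdpShadow {
    param(
        [Parameter(Mandatory)] [string] $UserName,
        [Parameter(Mandatory)] [string] $Server
    )

    # query session prints a header row plus one row per matching session;
    # if there is no session it prints a single "No session exists..." message
    $lines = query session $UserName /SERVER:$Server 2>$null

    if (-not $lines -or @($lines).Count -lt 2) {
        Write-Output "$UserName is not currently logged on to $Server."
        return
    }

    # Take the first data row and pull the first purely numeric field as the ID
    # (disconnected sessions have an empty SESSIONNAME column, so positions shift)
    $fields    = (@($lines)[1] -split '\s+') | Where-Object { $_ }
    $sessionId = $fields | Where-Object { $_ -match '^\d+$' } | Select-Object -First 1

    mstsc /v:$Server /shadow:$sessionId /control /noconsentprompt
}

# Example: Start-RdpShadow -UserName 'user01' -Server 'server01'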

How to use Powershell to script a domain user's temporary file location

I am writing deployment scripts using Powershell to install Scheduled Tasks, Windows Services and IIS App pools.
Each of these items will be run under the identity of an Active Directory domain user. My issue is that the business rules enforced on the servers state that no process or user can write to the C drive.
Therefore I need to direct each installed object to use the E drive for temporary storage of any kind.
How can I assign the temp directory environment variable of a domain user using PowerShell on a server that will have no 'knowledge' of that domain user until I instantiate the installed objects?
When it comes to the IIS app pools, I have found a (hacky) solution that could potentially work:
https://serverfault.com/questions/711470/applicationpoolidentity-environment-variables-iis
It requires me to set the app pool to load the user profile, fire up the pool, snoop registry keys, obtain a SID, and then modify registry keys to set the environment variable for the temp drive.
Is there an easier way? And how could I do this for services and scheduled tasks?
Pie in the sky: I write one PowerShell script to modify the temp environment variable for this one domain account before installing any of these objects, and then when they are installed it "just works".
Any suggestions?
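For what it's worth, here is a hedged, untested sketch of the registry side of the approach described above: once the account's profile hive is loaded under HKEY_USERS (e.g. after the app pool, service, or task has run at least once with a profile), you can resolve the account to its SID and point its per-user TEMP/TMP at the E drive. The account name and paths below are illustrative:
$account = 'DOMAIN\svc-deploy'    # hypothetical deployment/service account
$tempDir = 'E:\Temp\svc-deploy'

# Resolve the account name to its SID
$ntAccount = New-Object System.Security.Principal.NTAccount($account)
$sid       = $ntAccount.Translate([System.Security.Principal.SecurityIdentifier]).Value

New-Item -Path $tempDir -ItemType Directory -Force | Out-Null

# Per-user environment variables live under the user's registry hive
$envKey = "Registry::HKEY_USERS\$sid\Environment"
if (Test-Path $envKey) {
    Set-ItemProperty -Path $envKey -Name TEMP -Value $tempDir
    Set-ItemProperty -Path $envKey -Name TMP  -Value $tempDir
}
else {
    Write-Warning "Profile hive for $account is not loaded; start the pool/service/task once, or load the hive first."
}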

Modify Active Directory Client OU from Client Machine

I am trying to create a PowerShell startup script for my domain-controlled computers that will place the computer into the specified OU. I would like the variables to be gathered on the local computer and then passed to the remote server. Once there, I would like to execute the last two lines on the server.
The script below does work if it is run on the server; however, as stated above, I would like to be able to execute this from a client machine. How can I make this happen?
$computername = $env:ComputerName
$new_ou = "OU=TestOU,DC=Test,DC=Controller,DC=com"
Import-Module ActiveDirectory
Get-ADComputer $computername | Move-ADObject -TargetPath $new_ou
Note: Before anyone asks...my goal is to have the OU be determined by the client IP address. I understand that there are scripts that will do what is described above, but they run strictly on the server and query DNS. I would rather have this run as a startup script on the local computer so I can better control which computers are being moved. At this point I am not interested in tackling that issue, only the issue of how to execute the above lines from a local machine.
I assume you want to run the last 2 lines on the server because you expect that most of your domain computers won't have the RSAT tools or AD cmdlets installed.
The way to run it on a server is to have PowerShell Remoting enabled on the server and then use Invoke-Command.
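For example, something along these lines (untested; 'DC01' is a placeholder for a domain controller or any server that has the ActiveDirectory module and remoting enabled):
$computername = $env:ComputerName
$new_ou = "OU=TestOU,DC=Test,DC=Controller,DC=com"

Invoke-Command -ComputerName 'DC01' -ScriptBlock {
    param($name, $ou)
    Import-Module ActiveDirectory
    Get-ADComputer $name | Move-ADObject -TargetPath $ou
} -ArgumentList $computername, $new_ou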
That authentication is typically done with Kerberos, though you could change the method, and you can supply credentials manually (though I doubt you want to be embedding credentials in the script).
You need to consider that the user making the AD changes needs permission to do so. Usually that's a domain admin, although permission could be delegated.
If you're running this as a startup script, it's running as SYSTEM. That account authenticates on the domain as the computer account (COMPUTERNAME$). This means that the computer account needs permission to move itself, which may mean it needs the ability to write objects into all possible OUs (I don't recall offhand which permissions are needed).
So you would either need to grant this ability to all computers (any computer in Domain Computers would have the ability to move any other computer to any OU), or somehow give each computer only the ability to move itself into the correct OU (which might still be too much in the way of permissions).
Another option is to make a customized session configuration on the server with a RunAs user. You could limit the users allowed to connect to the session (to Domain Computers), and limit the allowed commands so that the connecting computers can only run a limited set of functions/cmdlets. Even better, you can write your own function to do the change and only let them run that one. With the RunAs user being a privileged user in AD, the changes will work without the connecting user having the ability to make the changes directly, and without giving the connecting user the ability to use the privileged user or elevate their own permission. Remember that the connecting user in this case is the computer account.
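A rough sketch of that setup (untested; the endpoint name, function name, file paths, and RunAs account are all illustrative):
# --- On the server (run once, elevated) ---
$capabilities = @{
    Name        = 'Move-ClientComputer'
    ScriptBlock = {
        param([Parameter(Mandatory)][string]$ComputerName,
              [Parameter(Mandatory)][string]$TargetOU)
        Import-Module ActiveDirectory
        Get-ADComputer $ComputerName | Move-ADObject -TargetPath $TargetOU
    }
}
New-PSSessionConfigurationFile -Path C:\PSEndpoints\MoveComputer.pssc `
    -SessionType RestrictedRemoteServer `
    -FunctionDefinitions $capabilities

# RunAs account with delegated rights to move computer objects; use the
# security descriptor UI to grant connect access to Domain Computers
Register-PSSessionConfiguration -Name MoveComputer `
    -Path C:\PSEndpoints\MoveComputer.pssc `
    -RunAsCredential (Get-Credential 'DOMAIN\svc-ADMove') `
    -ShowSecurityDescriptorUI

# --- On the client (e.g. in the startup script, running as SYSTEM) ---
Invoke-Command -ComputerName 'DC01' -ConfigurationName MoveComputer -ScriptBlock {
    Move-ClientComputer -ComputerName $env:COMPUTERNAME -TargetOU 'OU=TestOU,DC=Test,DC=Controller,DC=com'
}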
This last method is, I think, the best/most secure way to do what you want, if you insist that it must be initiated from the client machine.
Reconsider doing this as a server-side process. Get-ADComputer can return an IPv4 address for the object, so you could use that instead of DNS. Centralizing it would make it easier to manage and troubleshoot the process.

Accessing files over the network through a script running as NT Authority\System

I'm not sure if I am asking this in the right spot or not, sorry if I am wrong.
I would like to know the following, please. SCCM is currently operational in our school, and we use it to install software across our network.
I have a piece of software that requires a different channel for each room or staff laptop that it is installed in.
I have managed to set up a PowerShell script that polls a CSV for the channel that should be assigned to each room, and when the script is run, it pulls that channel and installs the software with that channel assigned.
What I am having trouble with now, is that SCCM installs the software using the local system account, and the csv is located on a network share.
When the System account goes to poll the CSV file it gets an access denied error, even though System has full control of the CSV and the directory that the CSV is located in.
Is it just me not understanding the permissions that System has, or can System not interact with other devices over the network? I assumed that, being System on both devices, it would be able to cross to the other device and impersonate System on that device.
Is there a way around this?
Thanks for any feedback.
The SYSTEM account uses the machine account (e.g. COMPNAME$) when accessing the network. If you're on AD, you can grant that computer account access in the file share's ACL. If you don't have a domain, you can create a local account with a matching username and password on both machines and configure the service to run as that account.
By simply adding Domain Computers to the file's permissions list and assigning it Read/Write permissions, I am able to let any computer in this group (all computers on the domain) access the specific files.
This is also what Andy Arismendi was saying, just using an already set-up group.
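If you prefer to script the grant itself, here is a small hedged sketch using icacls (the share path and names are illustrative, and the share-level permissions also need to allow the access):
$csvFolder = '\\fileserver\deploy\channels'    # hypothetical path holding the CSV

# Grant a single machine account read access (note the trailing $)
icacls $csvFolder /grant 'DOMAIN\LAB-PC01$:(OI)(CI)R'

# Or grant every domain-joined computer read access via the built-in group
icacls $csvFolder /grant 'DOMAIN\Domain Computers:(OI)(CI)R'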