I'm trying to eventually create a WinForms PowerShell tool that manages disk quotas on a Windows Server 2016 file server from a Windows 10 client workstation. Given that every single user has at least 2 mapped drives, is there any way to query a specific user and volume without Get-WmiObject cycling through every single quota first? The goal is to create a tool as fast as or faster than just remoting into the file server to increase quotas as requests come in, but to increase a quota we need to know the current quota limit first.
I also know that I could map the root of a drive to a letter using my admin credentials, but we don't allow saving credentials for mapped drives: we have to follow guidelines set by the federal government (State Health Care Agency) and our security team doesn't allow credentials to be saved in any kind of Windows logon. So the powers that be want me to find a way around us remoting into the server with our admin credentials to increase quotas.
We don't use FSRM on the server; instead we manage NTFS disk quotas per volume. That decision was made before my time.
For testing I have tried using the following:
$query = "Select * from win32_diskquota where QuotaVolume=""Win32_LogicalDisk.DeviceID='$volume'"" AND User =""Win32_Account.Domain='$domain',Name='$Username'"""
Get-WmiObject -Query $query -ComputerName server
This one I knew was going to take forever, but tried it anyway:
Get-WmiObject Win32_DiskQuota -ComputerName $computer | Where-Object {$_.QuotaVolume -eq "Win32_LogicalDisk.DeviceID=$drive" -and $_.User -eq "Win32_Account.Domain=$Domain,Name=$name"}
I've also used Get-CimInstance just to see if it would compile the list of quotas from the remote server any faster, but no dice.
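For reference, a filtered Get-CimInstance variant might look like the sketch below. This is only a sketch: it assumes WQL will compare the QuotaVolume and User reference properties against their relative-path strings (which use embedded double quotes, e.g. Win32_LogicalDisk.DeviceID="E:"), and $volume, $domain and $username are placeholders.
$volume   = 'E:'
$domain   = 'CONTOSO'
$username = 'jdoe'

# The key values inside the reference paths need embedded double quotes,
# so the WQL string literals are wrapped in single quotes instead.
$filter = "QuotaVolume='Win32_LogicalDisk.DeviceID=`"$volume`"' AND User='Win32_Account.Domain=`"$domain`",Name=`"$username`"'"
Get-CimInstance -ClassName Win32_DiskQuota -Filter $filter -ComputerName server |
    Select-Object User, QuotaVolume, Limit, DiskSpaceUsed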
Any suggestions, hints, or advice would be great. Thanks in advance.
Related
I need to query some WMI values using PowerShell from Windows 10 devices. The script is executed in the context of a non-admin user by some software distribution tooling.
There is a local admin account, and for the current purpose (retrieving information before wiping the system) it wouldn't be a problem to put the password in the script. Since automation is a hard requirement, there is no way to deal with UAC prompts or to have the user enter credentials.
Is there any way to get
$sess = New-CimSession -Credential $admincred
to work without running into "Access is denied", given that it isn't run in an elevated context? Can I somehow self-elevate it just by having the admin credentials?
[Edit]
The comments asked me to provide more concrete information:
I want to onboard many unmanaged (i.e. no software distribution tool, no domain join) Windows 10 devices to Windows Autopilot.
The devices are not at a specific site.
The device vendor can't provide the information.
The users don't have administrative privileges
The users don't know the local admin password (I do)
Exposing the local admin password is less of a problem than the missing tech knowledge of the users (the password is considered legacy)
The firewall is preventing incoming traffic (no RDP, WinRM)
Code (Source):
$devDetail = (Get-CimInstance -CimSession $session -Namespace root/cimv2/mdm/dmmap -Class MDM_DevDetail_Ext01 -Filter "InstanceID='Ext' AND ParentID='./DevDetail'")
It is too time-consuming to get the information through manual remote sessions with a tool like TeamViewer. Getting the users to download a tool from the intranet and run it would be a way to go, so I created a standalone application that builds and runs a customized PowerShell script. What won't work is getting it to run in an elevated session; I always end up with "Access denied".
Can I somehow self-elevate it by just having the admin credentials?
No, you cannot. UAC is designed to prevent exactly what you are trying to do. Related Q&A:
elevate without prompt - verb runas start-process
UAC Getting in the Way of EXE Install Powershell
Powershell provide credentials for RunAs
There may be many workarounds, but they all have in common that you have to go to your machines (locally or remotely) at least once, gain administrative privileges and prepare something, e.g.:
A scheduled task that runs under your local administrator account or under SYSTEM and triggers the execution of your script (see the sketch after this list)
Disabling UAC temporarily (not recommended either way)
Installing any remote management software, services or accounts (with the extra run-as-background-job privilege)
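As a rough illustration of the first option (not a drop-in solution: the script path and task name are placeholders, and registering the task still requires one elevated run on each device):
# One-time preparation, run elevated on the device: register a task that
# runs the collection script as SYSTEM at every startup.
$action    = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\ProgramData\Onboard\Get-DevDetail.ps1'
$trigger   = New-ScheduledTaskTrigger -AtStartup
$principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' -LogonType ServiceAccount -RunLevel Highest
Register-ScheduledTask -TaskName 'Collect-AutopilotInfo' -Action $action -Trigger $trigger -Principal $principal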
I cannot determine from the System Center Configuration Manager SDK website https://learn.microsoft.com/en-us/sccm/develop/core/misc/system-center-configuration-manager-sdk how to initiate the removal of a computer from all asset collections it may be a part of.
My company now has the ability to build, deploy, and destroy virtual servers all within a few hours. All newly built servers automatically have the SCCM client installed. The current SCCM policy is to remove servers from asset groups after they have been inactive for 22 consecutive days. This is still needed for our legacy environment.
I have been informed by my VM team that there is already code that runs automatically after a virtual server is destroyed. I want to include code that can reach out to our SCCM server to initiate the removal of the server that was destroyed. I have never worked with this kind of approach before and have been unsuccessful in figuring out where to even start. I believe it is called an API or hooks.
I would prefer using PowerShell for coding as that is the language I know.
The WMI code to delete a record from SCCM would be:
$comp = [wmi]"\\<siteserver>\root\sms\site_<sitename>:sms_r_system.resourceID='<resourceid>'"
$comp.psbase.Delete()
To get the ResourceID, you can query SMS_R_System by name:
(Get-WmiObject -ComputerName <siteserver> -Query "select ResourceID from SMS_R_System where Name like '<computername>'" -Namespace "root\sms\site_<sitename>").ResourceID
Keep in mind that this may return more than one entry (which is the reason we delete by ResourceID rather than by name), so it might be necessary to delete all of the records.
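Putting the two pieces together, a sketch of what your post-destroy code could call might look like this (the site server, site code and computer name are placeholders, and the account running it needs rights to delete resources in the site):
# Find every SMS_R_System record matching the destroyed server's name
# and delete each one by its own ResourceID.
$records = Get-WmiObject -ComputerName '<siteserver>' -Namespace 'root\sms\site_<sitename>' -Query "select * from SMS_R_System where Name like '<computername>'"
foreach ($record in $records) {
    $record.psbase.Delete()
}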
If you have access to the SCCM cmdlets (only on a server where the SCCM console is installed), it would be:
Get-CMDevice -Name <DeviceName> | Remove-CMDevice
vFriends.
I have a very specific question:
I have a datacenter with 4 clusters, made up of 14 big hosts and almost 500 VMs. That many VMs created the need to collect information from them, so I built a tool that collects the info through several PowerShell scripts of my own that connect to the vCenter Server. Here is an example:
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer vCenterServer.mydomain.com -wa 0
Get-Stat -Realtime -MaxSamples 1 -Stat cpu.latency.average -Entity (Get-VMHost * | Get-VM * | Where-Object {$_.PowerState -eq "PoweredOn"}) | Select-Object Entity,MetricId,Value | Format-Table
This one gets the last latency average read of all VMs. There are many others.
It has always worked like a charm, and I have more than 6 months of history and a good source for new investments and managerial decision making.
Until the VCB backup tool started using a similar method to gather info for its backups. When my tool is running, the backup never starts. I tried installing PowerCLI on another server and collecting from there, but it turned out to be painfully slow to retrieve the data (yes, I disabled the certificate check too): around 5 minutes on average, compared to the 30 seconds from inside the vCenter.
Note: vRealize doesn't give me the info I need. VMTurbo does, but it's too expensive to buy right now.
Then, I have 3 alternatives that I thought of:
Use the other server and lose 450% of the current data sampling
Ask the backup analyst to stop my scripts every time the backup runs (causing another big gap in the collected data)
Install another vCenter server in order to run my scripts OR have the back up tool connect to it.
I don't actually want a vCenter operating in Linked Mode. I just want another vCenter for the purposes listed, much like an additional Active Directory server in a forest.
Is that possible?
Am I missing another good alternative?
How do I configure it?
Will a Platform Services Controller server do the trick?
Thanks,
Dave
I am trying to create a PowerShell startup script for my domain-joined computers that will place the computer into the specified OU. I would like the variables to be gathered on the local computer and then passed to the remote server. Once there, I would like to execute the last two lines on the server.
The script below does work if it is run on the server; however, as stated above, I would like to be able to execute this from a client machine. How can I make this happen?
$computername = $env:ComputerName
$new_ou = "OU=TestOU,DC=Test,DC=Controller,DC=com"
Import-Module ActiveDirectory
Get-ADComputer $computername | Move-ADObject -TargetPath $new_ou
Note: Before anyone asks... my goal is to have the OU determined by the client IP address. I understand that there are scripts that will do what is described above, but they run strictly on the server and query DNS. I would rather have this run as a startup script on the local computer so I can better control which computers are being moved. At this point I am not interested in tackling that issue, only the issue of how to execute the above lines from a local machine.
I assume you want to run the last 2 lines on the server because you expect that most of your domain computers won't have the RSAT tools or AD cmdlets installed.
The way to run it on a server is to have PowerShell Remoting enabled on the server and then use Invoke-Command.
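For example, a minimal sketch of that call, reusing your variables (the server name is a placeholder, and this assumes Remoting is already enabled there; the permission caveats below still apply):
$computername = $env:ComputerName
$new_ou = "OU=TestOU,DC=Test,DC=Controller,DC=com"

# Run the AD cmdlets on the server instead of the client.
Invoke-Command -ComputerName '<your-server>' -ScriptBlock {
    param($name, $ou)
    Import-Module ActiveDirectory
    Get-ADComputer $name | Move-ADObject -TargetPath $ou
} -ArgumentList $computername, $new_ou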
That authentication is typically done with Kerberos, though you could change the method, and you can supply credentials manually (though I doubt you want to embed credentials in the script).
You need to consider that the user making the AD changes needs permission to do so. Usually that's a domain admin, although permission could be delegated.
If you're running this as a startup script, it's running as SYSTEM. That account authenticates on the domain as the computer account (COMPUTERNAME$). This means that the computer account needs permission to move itself, which may mean it needs the ability to write objects into all possible OUs (I don't recall offhand which permissions are needed).
So you would either need to grant this ability to all computers (any computer in Domain Computers would have the ability to move any other computer to any OU), or somehow give each computer only the ability to move itself into the correct OU (which might still be too much in the way of permissions).
Another option is to make a customized session configuration on the server with a RunAs user. You could limit the users allowed to connect to the session (to Domain Computers), and limit the allowed commands so that the connecting computers can only run a limited set of functions/cmdlets. Even better, you can write your own function to do the change and only let them run that one. With the RunAs user being a privileged user in AD, the changes will work without the connecting user having the ability to make the changes directly, and without giving the connecting user the ability to use the privileged user or elevate their own permission. Remember that the connecting user in this case is the computer account.
This last method is, I think, the best/most secure way to do what you want, if you insist that it must be initiated from the client machine.
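A rough sketch of that setup, run once (elevated) on the server; the configuration name, file path, RunAs account and SDDL are all placeholders you would adapt:
# Session configuration that exposes only one function and runs it as a
# privileged AD account on behalf of the connecting computer account.
New-PSSessionConfigurationFile -Path 'C:\PSConfig\MoveComputer.pssc' -SessionType RestrictedRemoteServer -FunctionDefinitions @{
    Name        = 'Move-ClientComputer'
    ScriptBlock = {
        param($ComputerName, $TargetOU)
        Import-Module ActiveDirectory
        Get-ADComputer $ComputerName | Move-ADObject -TargetPath $TargetOU
    }
}

Register-PSSessionConfiguration -Name 'MoveComputer' -Path 'C:\PSConfig\MoveComputer.pssc' -RunAsCredential (Get-Credential 'DOMAIN\DelegatedAdmin') -SecurityDescriptorSddl '<SDDL granting Domain Computers Invoke access>'
The client would then connect with Invoke-Command -ConfigurationName 'MoveComputer' and could only call Move-ClientComputer.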
Reconsider doing this as a server-side process. Get-ADComputer can return an IPv4 address for the object, so you could use that instead of DNS. Centralizing it would make it easier to manage and troubleshoot the process.
In our environment we deploy some applications with a PowerShell wrapper in rare cases. For logging purposes, we want the script to retrieve properties from SCCM during installation. The properties in question are name, version, and vendor. Doing some research, I figured out that I can get an instance of CCM_Application from the SCCM 2012 Client SDK:
Get-WmiObject -Namespace "Root\CCM\Clientsdk" -Class "CCM_Application" | Where {$_.EvaluationState -eq 12}
By looking for the EvaluationState value 12, I find the application in Software Center that is currently being installed. This works great for applications deployed to devices. However, when running it with applications deployed to users, it doesn't return anything. Doing some research, I discovered that CCM_Application is user-centric, and the privileged service account running the script doesn't have an instance of the application.
Is there a way of making the above code work with applications deployed to users? Also, is there a better way of retrieving properties from ccmexec during execution?
I know this is a very old post. The only way around this that I know of would be to create a scheduled task via Group Policy that runs as the logged-in user. Trigger that, have it write the information to a file, and then read the file.
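A sketch of what that user-context task could run, writing the details somewhere the wrapper (running as SYSTEM) can pick them up; the output path is a placeholder:
# Runs as the logged-in user, so the user-targeted CCM_Application
# instances are visible. Export the app currently being installed.
Get-CimInstance -Namespace 'root\CCM\ClientSDK' -ClassName 'CCM_Application' |
    Where-Object { $_.EvaluationState -eq 12 } |
    Select-Object Name, SoftwareVersion, Publisher |
    Export-Csv -Path 'C:\ProgramData\AppInfo\CurrentInstall.csv' -NoTypeInformation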
We were doing similar work, and we were able to get what we needed by reading AppEnforce.log, as crude as that is.