TLDR: How do I automate uninstalling all drivers in two Device Manager categories without needing to know the OEM numbers beforehand?
First things first - I'm as far from an expert as they come. I'm an L1 support desk grunt messing with PowerShell to try to automate the tedious parts of my job. A persistent issue we've got with 90% of our machines requires uninstalling all drivers for audio devices, and because I'm too lazy to do this in a remote session, I'm trying to automate it through a script that fires off a bunch of commands via psexec to a specified hostname.
The downside is that the driver name isn't always the same on each machine, and the OEM number for the drivers isn't consistent across machines either. That doesn't matter when you're doing it through Device Manager - you just uninstall everything under the Audio I/O and Sound Controllers dropdowns - but I've no idea how to specify this on the command line.
I'm sure it's possible. I've been poking around at pnputil and Get-WindowsDriver, and there's gotta be some way to do it. There might be something with wmic that could work, but I'm not familiar enough with that command. I could just do it manually, but then I'd have to spend five minutes in a laggy remote session making small talk with a user, and I can't stand small talk.
So essentially my question is: is there a way to query the OEM info of every driver in a specific category and then pipe that info into a cmdlet that'll uninstall them?
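The closest I've gotten so far is something along these lines - the class names ('MEDIA' for Sound, video and game controllers, 'AudioEndpoint' for Audio inputs and outputs) and the pnputil switches are my guesses, so treat it as a sketch that needs testing:

    # Rough sketch: enumerate devices in the two audio classes, pull the oemXX.inf
    # for each one, then remove those driver packages. Assumes Windows 10 1607+.
    $classes = 'MEDIA', 'AudioEndpoint'

    $infs = foreach ($class in $classes) {
        Get-PnpDevice -Class $class -ErrorAction SilentlyContinue | ForEach-Object {
            (Get-PnpDeviceProperty -InstanceId $_.InstanceId -KeyName 'DEVPKEY_Device_DriverInfPath').Data
        }
    }

    # Only third-party packages (oemXX.inf) can be removed this way
    $infs | Where-Object { $_ -like 'oem*.inf' } | Sort-Object -Unique | ForEach-Object {
        pnputil /delete-driver $_ /uninstall /force
    }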
I am using PowerShell to manage Autodesk installs, many of which depend on .NET, and some of which install services which they then try to start. If the required .NET isn't available, that install stalls with a dialog that requires user action, despite the fact that the install was run silently. Because Autodesk are morons.
That said, I CAN install .NET 4.8 with PowerShell, but because PowerShell is dependent on .NET, that will complete with exit code 3010, Reboot Required.
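For reference, the silent install that hits that 3010 looks roughly like this - the installer filename and switches are my assumptions, so verify them against your media:

    # Minimal sketch: silent .NET 4.8 install, trapping the 3010 exit code.
    $proc = Start-Process -FilePath 'C:\Installers\ndp48-x86-x64-allos-enu.exe' `
        -ArgumentList '/q', '/norestart' -Wait -PassThru

    switch ($proc.ExitCode) {
        0       { 'Installed; no reboot needed' }
        3010    { 'Installed; reboot required before the Autodesk installs will behave' }
        default { "Installer failed with exit code $_" }
    }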
So that leaves me with the option of either managing .NET separately, or triggering that reboot and continuing the Autodesk installs in a state that will actually succeed.
The former has always been a viable option in office environments, where I can use Group Policy or SCCM or the like, then use my tool for the Autodesk stuff that isn't well handled by other approaches. But that falls apart when you need to support the work-from-home scenario, which is becoming a major part of AEC practice. Not to mention that many, even large, AEC firms don't have internal GP or SCCM expertise, and more and more firm management is choosing to outsource IT support, all too often to low-cost, glorified help desk outfits with even less GP/SCCM knowledge. So I am looking for a solution that fits these criteria:
1: Needs to be secure.
2: Needs to support access to network resources where the install assets are located, which have limited permissions and thus require credentials to access.
3: Needs to support remote initiation of some sort - PowerShell remote jobs, PowerShell remoting to create a scheduled task, etc. (a rough sketch of the scheduled-task variant follows this list).
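For that third criterion, the scheduled-task variant I have in mind looks roughly like the following - the computer name, task name, and script path are placeholders, and running as SYSTEM still has the network-access problem I describe below:

    # Sketch only: use PS remoting to register a run-at-startup task on the target.
    # Names and paths are placeholders; SYSTEM context means no network access.
    Invoke-Command -ComputerName 'PC-001' -ScriptBlock {
        $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
            -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Deploy\Install-Autodesk.ps1'
        $trigger = New-ScheduledTaskTrigger -AtStartup
        Register-ScheduledTask -TaskName 'AutodeskInstall' -Action $action `
            -Trigger $trigger -User 'SYSTEM' -RunLevel Highest
    }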
I know you can trigger a script to run at boot in SYSTEM context, but my understanding is that because SYSTEM isn't an actual user, you don't have access to network resources in that case. And that would only really be viable if I could easily change the logon screen to make VERY clear to users that installs are underway and that they should not log on until the installs are complete and the logon screen is back to normal. Which I think is not easily doable, because Microsoft makes it near impossible to make temporary changes/messaging on the logon screen.
I also know I can do a one-time request for credentials on the machine and save those credentials as a secure file. From then on I can access those credentials so long as I am logged in as the same user. But that then suggests rebooting with automatic logon as a specific user, and as far as I can tell, doing that requires a clear-text password in the registry. Once I have credentials as a secure file, is there any way to trigger a reboot and a one-time automatic logon using those secure credentials? Or is any automatic reboot and logon always a less-than-secure option?
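The credential-caching piece, for context, is just the standard DPAPI-backed Export-Clixml approach - the paths and share name below are placeholders:

    # One-time prompt; the SecureString inside is DPAPI-protected, so the file
    # can only be decrypted by the same user on the same machine.
    Get-Credential | Export-Clixml -Path 'C:\ProgramData\Deploy\install.cred'

    # Later, in that same user context:
    $cred = Import-Clixml -Path 'C:\ProgramData\Deploy\install.cred'
    New-PSDrive -Name Installs -PSProvider FileSystem -Root '\\server\installs' -Credential $cred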
EDIT: I did just find this, which seems to suggest a way to use HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon without using a plain-text DefaultPassword. The challenge is figuring out how to do this in PowerShell when you don't know C#. Hopefully someone can verify this is a viable approach before I invest too much time in trying to implement it for testing. :)
And, on a related note, everything I have read about remote PowerShell jobs and the second-hop problem suggests the only "real" solution is to use CredSSP, which is itself innately insecure. But a lot of that is old information, predating Windows 10 for the most part, and I wonder whether it is STILL true. Or perhaps it was never true, since none of the authors claiming CredSSP is insecure explained in detail WHY, which to me is a red flag that maybe someone is just complaining to get views.
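For completeness, the CredSSP setup everyone describes is along these lines (host names are placeholders); my question is whether it is still the insecure option it was made out to be:

    # Sketch of the usual CredSSP second-hop setup; host names are placeholders.
    # On the admin workstation:
    Enable-WSManCredSSP -Role Client -DelegateComputer 'target.contoso.com' -Force

    # On the target machine:
    Enable-WSManCredSSP -Role Server -Force

    # Then remote with explicit CredSSP authentication so the second hop works:
    Invoke-Command -ComputerName 'target.contoso.com' -Authentication Credssp `
        -Credential (Get-Credential) -ScriptBlock { Get-ChildItem '\\fileserver\installs' }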
I am a network admin with very little experience coding or using PowerShell. About once a month I have to check for and install Windows updates on about 25 servers. I've played around with PowerShell in hopes of handling this task in a more automated fashion, but I get hung up getting the servers to actually install the updates after checking. I apologize for posting such a noob question, but can anyone let me know if this is possible and, if so, show me the ways of your dark arts?
WSUS will require you to install the components and set up the profiles, etc. If you have a large number of servers on a single network, that is your best bet for delivering the content.
If you just want to be able to schedule and run the updates on specific remote hosts, there is a ton of stuff already available that will do this; you just need to come up with your own implementation for scheduling which updates go to which hosts. I did this exact thing for a prior employer for 10k-plus servers worldwide, using a web app for the owners to schedule the updates and then a back-end workflow to perform the approval requests, installs, logging, etc.
The PowerShell Gallery is a good start. Here is a post that walks you through using PSWindowsUpdate.
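To give you a taste, the basic PSWindowsUpdate flow on a single box is roughly this - treat it as a sketch and test on one server before you loop it over all 25:

    # Rough sketch using the PSWindowsUpdate module from the PowerShell Gallery.
    Install-Module PSWindowsUpdate -Force

    # See what's pending
    Get-WindowsUpdate

    # Install everything, accept the prompts, and skip the automatic reboot
    Install-WindowsUpdate -AcceptAll -IgnoreReboot

The Windows Update agent is picky about being driven over a plain remoting session, which is why the module also ships Invoke-WUJob to schedule the install as a local job on each remote host.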
While learning and looking through PowerShell commands, I came across the Add-History cmdlet. It is not clear to me what purpose this command could serve (I have already looked through its documentation on Microsoft Docs); the only plausible reason I can see is that one would want to fabricate history in PowerShell to cover their tracks (because the user or a script may have been performing some shady business). Could someone please give a use case where Add-History could be used?
I know you mentioned reading the docs, but they give several plausible reasons and examples there already (emphasis mine):
You can use this cmdlet to add specific commands to the history or to create a single history file that includes commands from more than one session.
Suppose you have a cluster of machines that are configured identically but you need to run the same commands on them frequently.
You could have a system of sharing history, possibly cherry-picked history, that contains frequently used commands.
Or maybe running a command on any one system is already sufficient to act on all of them, and on any one server you want to be able to see the full history of what has been run - basically a single timeline of everything that took effect, even if it was executed on another host.
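As a rough illustration of that shared-history idea (the share path is made up):

    # On server A: save this session's history somewhere shared
    Get-History | Export-Clixml -Path '\\share\pshistory\serverA.xml'

    # On server B: merge it into the local history
    Import-Clixml -Path '\\share\pshistory\serverA.xml' | Add-History

    # Get-History and Invoke-History on B now include the commands run on A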
Are there better ways to achieve these things? Yeah, almost certainly, but that's little reason not to give someone the option to manage their history (to say nothing of add-ons, plugins, modules that make the prompt better, etc.).
For example, posh-git, as a consequence of updating the prompt to show git status, must run git commands in the background. Maybe it needs to manipulate history so that those commands don't clog up your command history (no idea if it actually does this or needs to; that might not be the case).
In any case, history is not a security feature. Manipulating it is not a concern.
I’m trying to profile a Perl website I have to work on, which runs under IIS. The website uses Catalyst. I’m using Devel::NYTProf to profile it.
By default, the profile file is written to ./nytprof.out. I don’t have access to the command line used to launch Perl, nor can I pass arguments to it (I enable profiling with a use Devel::NYTProf line in my Perl file).
But I can’t find the file… Do you have any idea where it would be? Is there a better way to profile my website with NYTProf?
I assume you mean IIS.
Have you checked that the user the web server runs as has write permission to the likely folders? It used to run as IANONUSR (IIRC) or similar, which had very restricted permissions for obvious reasons.
The IIS FastCGI module lets you set environment variables for the FastCGI processes, which should let you set out_file for NYTProf. If all else fails, you can hack Run.pm in NYTProf and change the location that way - crude, but at least you know where it's trying to write.
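If digging through the FastCGI configuration turns out to be fiddly, a blunter workaround is to set the variable machine-wide and restart IIS so the FastCGI processes inherit it - the path and option names here are examples, so check the NYTProf docs for the exact spelling:

    # Crude but effective: set NYTPROF for the whole machine, then bounce IIS so
    # the FastCGI processes pick it up. Path and options are examples only;
    # addpid should keep concurrent processes from clobbering each other's output.
    [Environment]::SetEnvironmentVariable('NYTPROF', 'file=C:\Temp\nytprof.out:addpid=1', 'Machine')
    iisreset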
I salute your efforts, but I would probably just port the application to run under Linux. Getting NYTProf working under Linux the first time was hard enough, especially as the processes have to terminate normally - I ended up adding a method to the FastCGI processes to make them die when I fetched a specific URL, which I'd keep fetching until all the processes were dead.
That said, NYTProf was well worth the effort on Linux: I was able to track down a regular expression that was eating vast amounts of CPU and didn't even need to be called 99.9% of the time. My experience on Windows was that "fork" was a performance killer, but I think Microsoft has fixed that somewhat since my IIS days.
I want my Perl script to be able to handle a large number of users.
I'm going to run the script on Amazon cloud servers.
This is my understanding of how the cloud works:
At first, the script instances run on a single server.
Then, when that server gets overloaded by too many users, a second server is added to run more script instances.
Do I understand the cloud right?
Do I have to do anything special to make this process work?
Or maybe everything runs seamlessly and the only thing I have to do is upload the script to the image?
That is a bit too narrow a definition of cloud computing, but probably close enough for the purposes of this question. The process isn't seamless: you have to actually detect that you're running too hot for the single machine and add another instance. You can do this from Perl using the API. It does, however, take real time to spin up another instance, so it makes more sense to distribute your task across instances from the start.
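Just to show the shape of that call, this is what launching an extra instance looks like through the EC2 RunInstances API via the AWS Tools for PowerShell - the Perl SDKs expose the same action, and the AMI ID, instance type, and key name are placeholders:

    # Launch one more instance of the image your script is baked into.
    # All identifiers below are placeholders.
    Import-Module AWS.Tools.EC2
    New-EC2Instance -ImageId 'ami-0123456789abcdef0' -InstanceType 't3.micro' `
        -MinCount 1 -MaxCount 1 -KeyName 'my-keypair'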
If your Perl script is something that can already run cleanly in parallel, then you don't have to make many changes. Just shove it onto a number of instances and away you go.