While learning and looking through PowerShell commands, I came across the Add-History cmdlet. Having already read the documentation on Microsoft Docs, it is not clear to me what purpose this command serves; the only plausible reason I can see is that one would want to fabricate history in PowerShell to cover their tracks (because the user or a script had been up to something shady). Could someone please give a use case where Add-History would be useful?
I know you mentioned reading the docs, but they give several plausible reasons and examples there already (emphasis mine):
You can use this cmdlet to add specific commands to the history or to create a single history file that includes commands from more than one session.
Suppose you have a cluster of machines that are configured identically and on which you frequently need to run the same commands.
You could have a system of sharing history, possibly cherry-picked history, that contains frequently used commands.
Or maybe running a command on any one system is already sufficient to act on all of them, and on any one server you want to be able to see the history of everything that has been run; that would give you essentially a single timeline of every command that took effect, even if it was executed on another host.
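For example, a minimal sketch of carrying history from one session into another (the share path is just a placeholder):

    # In the session whose history you want to capture:
    Get-History | Export-Clixml -Path '\\fileshare\team\history.xml'

    # In another session, possibly on another machine, merge it in:
    Import-Clixml -Path '\\fileshare\team\history.xml' | Add-History

The imported entries then show up in Get-History and can be re-run with Invoke-History.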
Are there better ways to achieve these things? Yes, almost certainly, but that's little reason not to give someone the option to manage their history (to say nothing of add-ons, plugins, and modules that improve the prompt, etc.).
For example, posh-git, as a consequence of updating the prompt to show git status, must run git commands in the background. Maybe it needs to manipulate history so that those commands don't clog up your command history (no idea if it actually does this or needs to; that might not be the case).
In any case, history is not a security feature. Manipulating it is not a concern.
I am using PowerShell to manage Autodesk installs, many of which depend on .NET, and some of which install services that they then try to start. If the required .NET isn't available, the install stalls with a dialog that requires user action, despite the fact that the install was run silently. Because Autodesk are morons.
That said, I CAN install .NET 4.8 with PowerShell, but because PowerShell itself depends on .NET, that install completes with exit code 3010, Reboot Required.
So that leaves me with the option of either managing .NET separately, or triggering that reboot and continuing the Autodesk installs in a state that will actually succeed.
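For illustration, the kind of gate I mean looks roughly like this (the installer file name and switches are placeholders for whatever .NET 4.8 package you deploy):

    $proc = Start-Process -FilePath '.\ndp48-offline-installer.exe' `
        -ArgumentList '/q /norestart' -Wait -PassThru
    if ($proc.ExitCode -eq 3010) {
        # .NET is installed, but the Autodesk installs won't succeed until after a reboot
        Restart-Computer -Force
    }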
The former has always been a viable option in office environments, where I can use Group Policy or SCCM or the like, and then use my tool for the Autodesk stuff that isn't well handled by other approaches. But that falls apart when you need to support the work-from-home scenario, which is becoming a major part of AEC practice. Not to mention that many, even large, AEC firms don't have internal GP or SCCM expertise, and more and more firm management is choosing to outsource IT support, all too often to low-cost, glorified help-desk outfits with even less GP/SCCM knowledge. So I am looking for a solution that fits these criteria.
1: Needs to be secure.
2: Needs to support access to network resources where the install assets are located, which have limited permissions and thus require credentials to access.
3: Needs to support remote initiation of some sort: PowerShell remote jobs, PowerShell remoting to create a scheduled task, etc. (a rough sketch of the scheduled-task variant is below).
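Something along these lines is what I have in mind for the scheduled-task variant (computer name, credential, task name, and script path are all placeholders):

    Invoke-Command -ComputerName $target -Credential $adminCred -ScriptBlock {
        $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
            -Argument '-ExecutionPolicy Bypass -File C:\Deploy\Install-Autodesk.ps1'
        $trigger = New-ScheduledTaskTrigger -AtStartup
        Register-ScheduledTask -TaskName 'AutodeskDeploy' -Action $action -Trigger $trigger `
            -User 'SYSTEM' -RunLevel Highest
    }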
I know you can trigger a script to run at boot in the System context, but my understanding is that, because System isn't an actual user, you don't have access to network resources in that case. And that would only really be viable if I could easily change the logon screen to make VERY clear to users that installs are underway and that they should not log on until the installs are complete and the logon screen is back to normal. I think that is not easily doable, because Microsoft makes it near impossible to put temporary changes or messaging on the logon screen.
I also know I can do a one-time request for credentials on the machine and save those credentials as a secure file. From then on I can access those credentials so long as I am logged on as the same user. But that then suggests rebooting with automatic logon as a specific user, and as far as I can tell, doing that requires a clear-text password in the registry. Once I have credentials in a secure file, is there any way to trigger a reboot and a one-time automatic logon using those secure credentials? Or is any automatic reboot and logon always a less-than-secure option?
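For reference, the secure-file approach I'm describing is just this (the paths are examples; the file can only be decrypted by the same user on the same machine, since Export-Clixml protects the password with DPAPI):

    # One time, interactively, while logged on as the account the scripts will run as:
    Get-Credential | Export-Clixml -Path 'C:\Deploy\installCred.xml'

    # Later, from scripts running as that same user on that same machine:
    $cred = Import-Clixml -Path 'C:\Deploy\installCred.xml'
    New-PSDrive -Name Assets -PSProvider FileSystem -Root '\\server\InstallAssets' -Credential $cred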
EDIT: I did just find this, which seems to suggest a way to use HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon without a plain-text DefaultPassword. The challenge is figuring out how to do that in PowerShell when you don't know C#. Hopefully someone can verify this is a viable approach before I invest too much time in implementing it for testing. :)
And, on a related note, everything I have read about remote PowerShell jobs and the second-hop problem suggests the only "real" solution is CredSSP, which is itself innately insecure. But most of that information is old, largely predating Windows 10, and I wonder if it is STILL true. Or perhaps it was never true, since none of the authors claiming CredSSP is insecure explain in detail WHY it is insecure, which to me is a red flag that maybe someone is just complaining to get views.
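For reference, the CredSSP setup those articles describe boils down to this (machine names and paths are placeholders):

    # On the machine I run scripts from:
    Enable-WSManCredSSP -Role Client -DelegateComputer 'target01.example.com' -Force

    # On the remote machine that needs to make the second hop:
    Enable-WSManCredSSP -Role Server -Force

    # Then remote with explicit delegation:
    Invoke-Command -ComputerName 'target01.example.com' -Authentication Credssp -Credential $cred `
        -ScriptBlock { Copy-Item '\\fileserver\assets\*' 'C:\Deploy' }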
TLDR: How do I automate uninstall of all drivers in two categories without needing to know the OEM number beforehand?
First things first - I'm as far from an expert as they come. I'm an L1 support desk grunt messing with PowerShell to try to automate the tedious parts of my job. A persistent issue with 90% of our machines requires uninstalling all drivers for audio devices, and because I'm too lazy to do this in a remote session, I'm trying to automate it with a script that fires off a bunch of commands through PsExec to a specified hostname.
The downside is that the driver name is not always the same on each machine, and the OEM number for the drivers isn't consistent across machines either. This doesn't matter when you're doing it through Device Manager - you just uninstall everything under the audio I/O and sound controllers categories - but I've no idea how to specify this on the command line.
I'm sure it's possible. I've been poking around at pnputil and Get-WindowsDriver, and there's got to be some way to do it. Something with wmic might also work, but I'm not familiar enough with that command. I could just do it manually, but then I'd have to spend five minutes in a laggy remote session making small talk with a user, and I can't stand small talk.
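The rough shape of what I'm picturing, in case it helps (untested, and the class names are guesses on my part):

    # List installed third-party driver packages and keep the audio-related classes
    $audio = Get-WindowsDriver -Online |
        Where-Object { $_.ClassName -in @('MEDIA', 'AudioEndpoint') }

    foreach ($drv in $audio) {
        # $drv.Driver is the published name, e.g. oem42.inf
        # (newer pnputil syntax; older builds use: pnputil -f -d oem42.inf)
        pnputil /delete-driver $drv.Driver /uninstall /force
    }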
So essentially my question is: Is there a way to query OEM info of every driver in a specific category, and then pipe that info into a cmdlet that'll uninstall them?
I'm unsure if what I'm trying to do is possible. I have a PowerShell script that takes three arguments. In a perfect world, I'd collect the necessary information in a web form and pass it to the script, which would then run.
I don't know if this is even possible, but I can't find anything that definitively tells me no. I'd need it to be cross-browser capable (we have some Macs) so I can't just do an IE-only fix.
This is also internal only, so I'm less concerned about some security risks. It will be behind our firewall.
Thanks.
I'm not sure I understand the question, but for collecting the necessary information in a web form and passing it to the script, which then runs, I use PowerShell Web Server (the poshserver tag on Stack Overflow). The sources are available, so I was even able to implement jobs for long-running scripts.
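If you want to see the bare mechanics without PoSHServer, the underlying idea is just an HTTP listener that reads the submitted values and calls your script (port, script path, and field names below are placeholders):

    $listener = New-Object System.Net.HttpListener
    $listener.Prefixes.Add('http://+:8080/')   # may need an elevated prompt or a urlacl
    $listener.Start()

    while ($listener.IsListening) {
        $context = $listener.GetContext()
        $q = $context.Request.QueryString
        # Hand the three submitted values to the existing script
        $result = & 'C:\Scripts\Do-Work.ps1' $q['first'] $q['second'] $q['third']

        $bytes = [System.Text.Encoding]::UTF8.GetBytes(($result | Out-String))
        $context.Response.OutputStream.Write($bytes, 0, $bytes.Length)
        $context.Response.Close()
    }

PoSHServer wraps this kind of plumbing up for you and adds the job handling I mentioned.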
Please excuse a newbie question, but I've always used SVN and, more recently, Git. I'm just now touching TFS for the first time.
If I have two different machines that I work on regularly, can I safely keep the project files in sync using something like Dropbox/Sugarsync/Skydrive?
Are there any pros/cons to be aware of?
(I know that some of you might ask something like why not just checkout on the other machine. Just trying to save a step. I want to just pick up the other machine and do what I need to do without having to check out anything.)
TFS workspaces contain information about the machine name and user that created them; however, if you're using local workspaces and you're not putting any server-side locks on files, then I suppose you could sync them via Dropbox and it would probably work just fine.
That said, I'd never recommend it.
You're not only going to sync all your code but also all the binaries you produce each and every time you compile; you won't have any change history between machines; and you'll need to keep monitoring the Dropbox app to make sure things have synced fully before switching machines.
If you want to move changes between two machines, I'd recommend using shelvesets. It only takes a few seconds and gives you a more explicit update process between machines. You can be sure of what is happening in your code on each machine, and you have an implicit rollback point if you realise you put something in the shelveset you didn't want.
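The round trip from a developer command prompt is roughly this (the shelveset name is arbitrary):

    # On the machine you're leaving:
    tf shelve SwitchingMachines /comment:WIP /noprompt

    # On the machine you're moving to (same server, its own workspace):
    tf unshelve SwitchingMachines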
Our shop has developed a few web/SMS/DB solutions for a dozen client installations. The applications have some real-time performance requirements and are just good enough to function properly. The problem is that the clients (owners of the production servers) are using the same server/database for customizations that are causing problems with the performance of the applications we created and deployed.
A few examples of clients' customizations:
Adding large tables whose columns are all text datatypes that then get cast to other data types in the queries
No primary keys, indexes, or FK constraints
Use of external scripts that run count(*) from table where id = x in a loop to decide how to construct more queries later in the same script (no bulk operations the planner can optimize, nothing done in a single pass)
All new code files on the server are created/owned by root, with 0777 permissions
The clients don't take suggestions or criticism well. If we just go ahead and try to port or change the scripts ourselves, the old code can come back, clobbering any changes we make! Or, with our limited knowledge of their use cases, we break functionality while trying to optimize their changes.
My question is this: how can we limit the resources available to queries/applications other than the ones we create and deploy? Are there any pragmatic options in scenarios like this? We prided ourselves on having an OSS solution, but it seems it has become a liability.
We use PG 8.3 running on a range of Linux distros. The clients prefer PHP, but shell scripts, Perl, Python, and PL/pgSQL are all used on the system in one form or another.
This problem started about two minutes after the first client was given full access to the first computer, and it hasn't gone away since. Any time someone's priority is getting business-oriented work done quickly, they will be sloppy about it and screw things up for everyone. That's just how things work, because proper design and implementation are harder than cheap hacks. You're not going to solve this problem; all you can do is figure out how to make it easier for the client to work with you than against you. If you do it right, it will look like excellent service rather than nagging.
First off, the database side. There's no way to control per-query resources in PostgreSQL. The main difficulty is that tools like nice control CPU usage, but if the database doesn't fit in RAM it may very well be I/O usage that is killing you. See this developer message summarizing the issues.
Now, if in fact it's CPU the clients are burning through, you can use two techniques to improve that situation:
Install a C function that changes the process priority (example 1, example 2) and make sure it gets called first whenever they run something (maybe put it into their psql configuration file; there are other ways).
Write a script that looks for postmaster processes spawned by their userid and renices them, and run it frequently from cron or as a daemon (rough sketch below).
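A very rough sketch of the second technique, assuming the backend process titles follow the usual "postgres: <role> <database> ..." form and that it runs as root from cron ('clientrole' is a placeholder; the same loop is easy in plain shell if you'd rather not run PowerShell on the host):

    ps -eo pid=,args= |
        Where-Object { $_ -match 'postgres:\s+clientrole\s' } |
        ForEach-Object {
            $backendPid = ($_.Trim() -split '\s+')[0]
            renice -n 10 -p $backendPid    # lower that backend's CPU priority
        }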
It sounds like your problem isn't the particular queries they're running, but rather the other modifications they're making to the larger structure. There's only one way to cope with that: you have to treat the client like an intruder and use the approaches of that portion of the computer security field to detect when they screw things up. Seriously! Install an intrusion detection system like Tripwire on the server (there are better tools; that's just the classic example) and have it alert you when they touch anything. A new file that's 0777? That should jump right out of a proper IDS report.
On the database side, you can't usefully detect modifications directly. You should dump the schema every day into a file (pg_dumpall -g and pg_dump -s), then diff it against the last one you delivered and, again, alert yourself when it changes. If you manage this well, contact with the client turns into "we noticed you changed something on the server...what is it you're trying to accomplish with that?", which makes you look like you're really paying attention to them. That can turn into a sales opportunity, and they may stop fiddling with things as much just knowing you're going to catch it immediately.
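A minimal sketch of that nightly check, assuming the PostgreSQL client tools are on the PATH of whatever machine runs the monitoring job (host names, database name, addresses, and file names are placeholders):

    # Dump the globals and the schema, then compare against the last approved snapshot
    pg_dumpall -g -h clientdb.example.com -U monitor > globals_today.sql
    pg_dump -s -h clientdb.example.com -U monitor clientdb > schema_today.sql

    $drift = Compare-Object (Get-Content schema_baseline.sql) (Get-Content schema_today.sql)
    if ($drift) {
        Send-MailMessage -To 'ops@example.com' -From 'monitor@example.com' `
            -SmtpServer 'smtp.example.com' -Subject 'Client schema changed' `
            -Body ($drift | Out-String)
    }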
The other thing you should start doing immediately is to install as much version control software as you can on each client box. You should be able to log in to each system, run the appropriate status/diff tool for the install, and see what's changed. Get that mailed to you regularly too. Again, this works best if combined with something that dumps the schema as a component of what it manages. Not enough people use serious version control approaches for the code that lives in their database.
That's the main set of technical approaches useful here. The rest of what you've got is a classic consulting client management problem that's far more of a people problem than a computer one. Cheer up, it could be worse--FSM help you if you give them ODBC access and they discover they can write their own queries in Access or something simple like that.