Passing Parameters and Running PowerShell Script - powershell

I'm unsure if what I'm trying to do is possible. I have a PowerShell script that takes three arguments. In a perfect world, I'd collect the necessary information in a web form and pass it to the script, which would then run.
I don't know if this is even possible, but I can't find anything that definitively tells me no. I'd need it to be cross-browser capable (we have some Macs) so I can't just do an IE-only fix.
This is also internal only, so I'm less concerned about some security risks. It will be behind our firewall.
Thanks.

I'm not sure I fully understand the question, but to collect the necessary information in a web form and pass it to a script that then runs, I use PoSH Server (PowerShell Web Server; tag: poshserver on Stack Overflow). The sources are available, so I was even able to implement jobs for long-running scripts.
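If you want a feel for the general shape of that approach without installing anything, here is a minimal sketch built on .NET's HttpListener rather than PoSH Server itself. The endpoint path, form field names, and Do-Work.ps1 are illustrative placeholders, not part of any real product.

    # Minimal sketch: accept a form POST and hand the values to an existing script.
    # Requires running elevated (or a urlacl reservation for the prefix).
    Add-Type -AssemblyName System.Web   # for HttpUtility.ParseQueryString

    $listener = New-Object System.Net.HttpListener
    $listener.Prefixes.Add('http://+:8080/runscript/')
    $listener.Start()

    while ($listener.IsListening) {
        $context = $listener.GetContext()
        $request = $context.Request

        if ($request.HttpMethod -eq 'POST') {
            # Parse the url-encoded form body into name/value pairs
            $reader = New-Object System.IO.StreamReader($request.InputStream, $request.ContentEncoding)
            $form   = [System.Web.HttpUtility]::ParseQueryString($reader.ReadToEnd())

            # Pass the three values as named parameters; never build a command string from them
            $result = & 'C:\Scripts\Do-Work.ps1' -Name $form['name'] -Site $form['site'] -Mode $form['mode']

            $bytes = [Text.Encoding]::UTF8.GetBytes(($result | Out-String))
            $context.Response.OutputStream.Write($bytes, 0, $bytes.Length)
        }
        $context.Response.Close()
    }

This stays cross-browser because the browser only ever sees an ordinary HTML form and an HTTP response.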

Related

Trigger reboot and script execution, securely

I am using PowerShell to manage Autodesk installs, many of which depend on .NET, and some of which install services that they then try to start. If the required .NET isn't available, the install stalls on a dialog that requires user action, despite the fact that the install was run silently. Because Autodesk are morons.
That said, I CAN install .NET 4.8 with PowerShell, but because PowerShell is dependent on .NET, that will complete with exit code 3010, Reboot Required.
So that leaves me with the option of either managing .NET separately, or triggering that reboot and continuing the Autodesk installs in a state that will actually succeed.
The former has always been a viable option in office environments, where I can use Group Policy or SCCM or the like, and then use my tool for the Autodesk stuff that is not well handled by other approaches. But that falls apart when you need to support the work-from-home scenario, which is becoming a major part of AEC practice. Not to mention that many, even large, AEC firms don't have internal GP or SCCM expertise, and more and more firm management is choosing to outsource IT support, all too often to low-cost, glorified help-desk outfits with even less GP/SCCM knowledge. So I am looking for a solution that fits these criteria:
1: Needs to be secure.
2: Needs to support access to network resources where the install assets are located, which have limited permissions and thus require credentials to access.
3: Needs to support remote initiation of some sort: PowerShell remote jobs, PowerShell remoting to create a scheduled task, etc. (the scheduled-task variant is sketched just below).
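For reference, the scheduled-task variant of criterion 3 might look roughly like this; the computer name, paths, and task name are placeholders, and running the task as SYSTEM is exactly what runs into the network-resource limitation discussed next.

    # Hypothetical sketch: remote in, register a startup task, then reboot.
    Invoke-Command -ComputerName 'PC-042' -ScriptBlock {
        $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
            -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Deploy\Install-Autodesk.ps1'
        $trigger = New-ScheduledTaskTrigger -AtStartup

        # SYSTEM avoids storing credentials, but SYSTEM generally cannot reach
        # the network share that holds the install assets.
        Register-ScheduledTask -TaskName 'AutodeskInstall' -Action $action -Trigger $trigger `
            -User 'SYSTEM' -RunLevel Highest

        Restart-Computer -Force
    }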
I know you can trigger a script to run at boot in the SYSTEM context, but my understanding is that because SYSTEM isn't an actual user, you don't have access to network resources in that case. And that would only really be viable if I could easily change the logon screen to make it VERY clear to users that installs are underway and that they should not log on until the installs are complete and the logon screen is back to normal. Which I think is not easily doable, because Microsoft makes it near impossible to put temporary changes or messaging on the logon screen.
I also know I can do a one-time request for credentials on the machine and save those credentials as a secure file. From then on I can access those credentials as long as I am logged in as the same user. But that then suggests rebooting with automatic logon as a specific user, and so far as I can tell, doing that requires a clear-text password in the registry. Once I have credentials in a secure file, is there any way to trigger a reboot and a one-time automatic logon using those secure credentials? Or is any automatic reboot and logon always a less-than-secure option?
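For context, the secure-file approach mentioned above is usually the Export-Clixml pattern; the password is protected with DPAPI, so it can only be decrypted by the same user account on the same machine. The paths and share name below are placeholders.

    # One-time: prompt for credentials and cache them, DPAPI-protected.
    Get-Credential | Export-Clixml -Path 'C:\Deploy\installer.cred'

    # Later, in the same user context on the same machine:
    $cred = Import-Clixml -Path 'C:\Deploy\installer.cred'
    New-PSDrive -Name Assets -PSProvider FileSystem -Root '\\fileserver\installs' -Credential $cred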
EDIT: I did just find this, which seems to suggest a way to use HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon without using a plain-text DefaultPassword. The challenge is figuring out how to do this in PowerShell when you don't know C#. Hopefully someone can verify this is a viable approach before I invest too much time in trying to implement it for testing. :)
And, on a related note, everything I have read about remote PowerShell jobs and the second-hop problem suggests the only "real" solution is CredSSP, which is itself said to be innately insecure. But most of that information is old, predating Windows 10 for the most part, and I wonder if it is STILL true. Or perhaps it was never true, since none of the authors claiming CredSSP is insecure explained in detail WHY it is insecure, which is to me a red flag that maybe someone is just complaining to get views.
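For what it's worth, the usual argument is that CredSSP delegates your full credentials to the remote machine, where they can be harvested if that machine is ever compromised; that is the WHY behind most of those warnings. The configuration itself is just a couple of cmdlets (the machine and share names below are placeholders):

    # On the machine you initiate the connection from (the first hop):
    Enable-WSManCredSSP -Role Client -DelegateComputer 'install-host.corp.local' -Force

    # On the machine that will receive and reuse the credentials:
    Enable-WSManCredSSP -Role Server -Force

    # The second hop then works because the credential travels with the session:
    Invoke-Command -ComputerName 'install-host.corp.local' -Authentication Credssp `
        -Credential (Get-Credential) -ScriptBlock { Get-ChildItem '\\fileserver\installs' }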

Invoking a script as part of a web api method : how bad an idea is it?

I have a PowerShell script (though I think these considerations extend to any script that requires a runtime to interpret and execute it) whose functionality I also need to expose to a web application front end as a REST API. I've been asked to call the script itself directly from the web method. Although that is technically feasible, having a web API method start a shell/process to execute the script and redirect stdin/stdout/stderr looks like very bad practice to me. Is there any specific security risk in doing something like this?
Reading this question brings to mind how many of the OWASP Top Ten Security Vulnerabilities it would expose your site to.
Injection Flaws - This is definitely a high risk. There are ways to remediate it, of course. Parameterizing all input as strongly-typed dates and numbers instead of strings is one method, but it may not fit your business case. You should never allow user-provided code to be executed, and if you are accepting strings as input and running a script against that input, it becomes very difficult to prevent arbitrary code execution. (A short sketch after this list shows the difference between splicing input into a command string and binding it to a validated parameter.)
Broken Authentication - possibly vulnerable. If you force a user to authenticate before reaching your script (you probably should), there is a chance that the user reuses their credentials elsewhere and exposes those credentials to a brute force attack. Do you lock out accounts after too many tries? Do you have two-factor authentication? Do you allow weak passwords? These are all considerations when you introduce a new authentication mechanism.
Sensitive data exposure - likely vulnerable, depending on your script. Does the script allow reading files and returning their contents? If not now, will it do so in the future? Even if it's never designed to do so, combined with other exploits the script might be able to read a file from a path that's outside the web directory. It's very difficult to prevent directory traversal exploits that would allow a malicious user access to your server, or even the entire network. Compiled code and the web server prevent this in many cases.
XML External Entities - possibly vulnerable, depending on your requirements. If you allow user-provided XML, the bad guy can inject other files and create havoc. This is easier to trap when you're using standard web tools.
Broken Access Control - definitely vulnerable. A Web API application can enforce user controls, and set permission levels in a C# controller. Exceptions are handled with HTTP status codes that indicate the request was not allowed. In contrast, Powershell executes within the security context of the logged in user, and allows system-level changes even if not running escalated. If an injection flaw is exploited, the code would be executed in the web server's security context, not the user's. You may be surprised how much the IIS_USER (or other Application Pool service account) can do. For one, if the bad guy is executing in the context of a service account, they might be able to bring down your whole site with a single request by locking out that account or changing the password - a task that's much easier with a Powershell script than with compiled C# code.
Security Misconfiguration - likely vulnerable. A running script would require its own security configuration outside whatever framework you are using for the Web API. Are you ready to re-implement something like OAuth claims or ACLs?
Cross-Site Scripting - likely vulnerable. Are you echoing the script output? If you're not sanitizing input and output, the script could echo some Javascript that sends a user's cookie content to a malicious server, giving them access to all the user's resources. Cross site request forgery is also a risk if input is not validated.
Insecure Deserialization - Probably not vulnerable.
Using Components with Known Vulnerabilities - greatly increased vulnerability, compared to compiled. Powershell grants access to a whole set of libraries that would otherwise need explicit references in a compiled application.
Insufficient Logging & Monitoring - likely vulnerable. IIS logs requests by default, but Powershell doesn't log anything unless you explicitly write to a file or start a transcript. Neither method is designed for concurrency and may introduce performance or functional problems for shared files.
In short, 9 out of the top 10 vulnerabilities may affect this implementation. I would hope that is enough to prevent you from making your script public, at the very least. Basically, the problem is that you are using the tool (PowerShell) for a purpose it wasn't intended to fulfill.
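To make the injection point above concrete, here is a minimal sketch; the script name and parameter are hypothetical, and the point is only the difference between input-as-code and input-as-data.

    # Dangerous: user input is spliced into command text, so input such as
    #   "alice; Remove-Item C:\ -Recurse"
    # is executed as additional commands.
    Invoke-Expression ".\Get-Report.ps1 -UserName $userInput"

    # Safer: validate the input against the shape you expect, then pass it as a
    # bound argument so it stays data and is never parsed as code.
    if ($userInput -notmatch '^[a-zA-Z0-9._-]+$') { throw 'Invalid user name.' }
    & .\Get-Report.ps1 -UserName $userInput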

Way to pull Exchange permissions

Maybe an easy question for someone who knows PowerShell and O365 well. Is there a way to configure things so that when a command is run, for example to pull all access to a shared mailbox, either a service account or the user running the script is permissioned each time to pull that information? I looked at connecting a service account to the script, but giving it the specific permissions would grant it too much access to O365. The idea is that the account is not permissioned for the access by default, but every time the script/command is run it is permissioned for that one inquiry, shows the result, and then has no access again until the next time it is called.
Looking to add this type of functionality to a script; we only want the helpdesk people to see the information when they run the script and the specific command within it.
Hopefully explained clear enough :)
Thanks all.
I don't think there is a way to do that natively. You could fiddle something together with Azure PIM, but that's more suited to one-off operations than to small actions that are done often.
You could, however, work around that by building some sort of web interface that triggers commands on another server using a privileged service account and returns the output through the web interface. Make it so the interface can only request one specific command to be run, and then the main thing you have to worry about is sanitizing your parameters well to avoid unwanted injection.
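A rough sketch of what that single-purpose wrapper might look like on the privileged server, assuming the ExchangeOnlineManagement module with app-only certificate authentication; the environment variable names, organization, and validation pattern are all assumptions to adapt.

    # Single-purpose wrapper: the web interface can only ever trigger this.
    param(
        [Parameter(Mandatory)]
        [string]$Mailbox
    )

    # Reject anything that does not look like a plain alias or address.
    if ($Mailbox -notmatch '^[a-zA-Z0-9._@-]+$') {
        throw "Rejected mailbox identity: $Mailbox"
    }

    Connect-ExchangeOnline -AppId $env:EXO_APPID -CertificateThumbprint $env:EXO_CERT `
        -Organization 'contoso.onmicrosoft.com'
    try {
        Get-MailboxPermission -Identity $Mailbox |
            Select-Object User, AccessRights, IsInherited
    }
    finally {
        Disconnect-ExchangeOnline -Confirm:$false
    }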
Alternatively, what are you trying to protect against by restricting access so much? Isn't it something that could be done more easily using a read-only account and some clearly defined policy? If your helpdesk people overstep their allowed scope, that's a management/HR problem as much as a technical one.

POST an HTML form to a Powershell script

I just need a plain static .html page form, to POST to a Powershell script.
I've seen plenty of Powershell Invoke-WebRequest cmdlet material, but where Powershell is always initiating the HTTP request (and then handling the HTTP response..)
Thank you!
The short answer is that you cannot POST directly to a PowerShell script. When you POST to a website, you are passing arguments to the web server, which presents them to code on that server (the target of your POST request) that the web server is capable of executing. Web servers do not understand PowerShell (unless Microsoft has implemented this, which a few quick googles suggest they haven't).
That being said, your ultimate goal is likely that you want to consume data that you sourced from a form via a PowerShell script. You will need to implement a backend on the webserver to consume the POST request and pass it to the operating system level to be run via PowerShell. This is generally not a good idea but if you are doing it for an internal site to get something running quickly then so be it.
Here is the process to call a Powershell script from ASP.Net: http://jeffmurr.com/blog/?p=142
You could approach this problem in many other ways. You could write your backend site to save the data from the POST request to a file, then come along and parse that file on a schedule with PowerShell. You could use a database in the same manner, or you could create a trigger in the database to run the script each time a row is appended.
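The file-drop variant could be as simple as the following, assuming the backend writes one JSON file per submission into a queue folder; the paths, field names, and Do-Work.ps1 are placeholders.

    # Scheduled task body: feed each queued submission to the existing script,
    # then move the file out of the queue.
    $queue   = 'C:\inetpub\form-queue'
    $archive = 'C:\inetpub\form-queue\done'

    Get-ChildItem -Path $queue -Filter '*.json' -File | ForEach-Object {
        $data = Get-Content -Path $_.FullName -Raw | ConvertFrom-Json
        & 'C:\Scripts\Do-Work.ps1' -Name $data.name -Site $data.site -Mode $data.mode
        Move-Item -Path $_.FullName -Destination $archive
    }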
I suspect that if you work down one of these pathways you will ultimately find that the technology you are using on the backend (like ASP.NET or PHP or JavaScript) is capable of doing the work you need done, and that you would have far fewer moving parts if you stuck with one of those. Don't be afraid to learn something new. Jumping to JavaScript from PowerShell is not that difficult.
And the world moves too fast. Here is a NodeJS-like implementation of a webserver in PowerShell.
https://gallery.technet.microsoft.com/scriptcenter/Powershell-Webserver-74dcf466

Perl application move causing my head to explode...please help

I'm attempting to move a web app we have (written in Perl) from an IIS6 server to an IIS7.5 server.
Everything seems to be parsing correctly, I'm just having some issues getting the app to actually work.
The app is basically a couple forms. You fill the first one out, click submit, it presents you with another form based on what checkboxes you selected (using includes and such).
I can get past the first form once... but then after that it stops working and pops up the generated error message. After looking into the code and such, it basically states that there aren't any checkboxes selected.
I know the app writes data into .dat files... (at what point, I'm not sure yet), but I don't see those being created. I've looked at file/directory permissions and seemingly I have MORE permissions on the new server than I did on the last. The user/group for the files/dirs are different though...
Would that have anything to do with it? Why would it pass me on to the next form, displaying the correct "modules" I checked the first time and then not any other time after that? (it seems to reset itself after a while)
I know this is complicated so if you have any questions for me, please ask and I'll answer to the best of my ability :).
Btw, total idiot when it comes to Perl.
EDIT AGAIN
I've removed the source as to not reveal any security vulnerabilities... Thanks for pointing that out.
I'm not sure what else to do to show exactly what's going on with this though :(.
I'd recommend verifying, step by step, that what you think is happening is really happening. Start by watching the HTTP request from your browser to the web server - are the arguments your second perl script expects actually being passed to the server? If not, you'll need to fix the first script.
(start edit)
There's lots of tools to watch the network traffic.
Wireshark will read the traffic as it passes over the network (you can run it on the sending or receiving system, or any system on the collision domain).
You can use a proxy server, like WebScarab (free), Burp, Paros, etc. You'll have to configure your browser to send traffic to the proxy server, which will then forward the requests to the server. These particular proxies are intended to aid testing, in that you'll be able to mess with the requests as they go by (and much more).
As Sinan indicates, you can use browser add-ons like Firefox's LiveHTTPHeaders or Tamper Data, or Internet Explorer's developer kit (IIRC).
(end edit)
Next, you should print out all CGI arguments that the second perl script receives. That way, you'll know what the script really thinks it gets.
Then, you can enable verbose logging in IIS, so that it logs the full HTTP request.
This will get you closer to the source of the problem - you'll know whether (a) the first script is not creating correct HTML, resulting in an incomplete HTTP request from the browser, (b) the IIS server is not receiving the CGI arguments for some odd reason, or (c) the arguments aren't getting from the IIS server into the Perl script (or, possibly, the Perl script is not correctly accessing them).
Good luck!
What you need to do is clear.
There is a lot of weird excess baggage in the script. There seemed to be no subroutines. Just one long series of commands with global variables.
It is time to start refactoring.
Get one thing running at a time.
I saw HTML::Template there but you still had raw HTML mixed in with code. Separate code from presentation.