I want to send one command to 12 different servers at the same time. I am also trying to use it with a Discord bot, which uses commands (not necessary).
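One common approach for the fan-out part is PowerShell remoting. A minimal sketch, assuming WinRM is already enabled on the targets and that the server names and the command are placeholders:

    # Minimal sketch: run one command on 12 servers in parallel over WinRM.
    # Assumes PowerShell remoting is enabled on each target and the account
    # running this has admin rights there. Server names are placeholders.
    $servers = 1..12 | ForEach-Object { "server$_" }

    # Invoke-Command runs the script block on all computers concurrently
    # (up to -ThrottleLimit at a time) and returns the combined output.
    $results = Invoke-Command -ComputerName $servers -ScriptBlock {
        hostname   # replace with the command you actually want to run
    } -ThrottleLimit 12

    $results

A Discord bot command handler would then only need to shell out to a script like this and post $results back to the channel.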
My setup: I have one computer running an application, foo, and logging information via rsyslog to a file on another, remote machine. The remote machine runs a different application, bar. bar reads the log files it receives and does some processing with this information; however, this is slow.

What I'm trying to do: I would like to pipe the information from foo's log file into the bar process directly. I suppose I could theoretically alter foo's source code to support something like this and bypass rsyslog or local file writes entirely, but it is massive enterprise-level software, so that would be a last resort.
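If you control the rsyslog configuration on the receiving machine, one way to skip the file round-trip is rsyslog's omprog output module, which feeds each matching message to a child process's stdin. A minimal config sketch (rsyslog 8.x syntax), assuming bar can consume newline-delimited messages on stdin and that foo's messages arrive tagged with the program name "foo" (the path and tag are assumptions):

    # Load the output module that pipes messages into a program's stdin.
    module(load="omprog")

    # For every message logged by foo, hand it to bar directly instead of
    # (or in addition to) writing it to a file.
    if $programname == 'foo' then {
        action(type="omprog" binary="/usr/local/bin/bar")
    }

omprog keeps the child process running and restarts it if it exits, so bar would need a long-running read-from-stdin mode for this to work.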
I do know about the double-hop issue. My scenario is this: I have a script I want to run remotely that calls another script located on a network share, which in turn calls a third script located on a second network share in a different domain.

Currently I am using CredSSP (I've read there can be security issues, but this environment is not public facing) to pass credentials for the first network share that holds script2. I do not have access to the computer in the second domain, so I cannot set up CredSSP on it. To work around that, inside script2 I use the "net use" command on the third script's share so the script can resolve the path. I then use "Copy-Item" to copy the third script onto the machine running script2 (the remote machine).

Up to this step, everything works when I run script1: I can see that script3 is copied over onto the remote machine. When script3 is called, it should make a web request that sends text to stdout (which I pipe to Out-File in script2). However, whenever I try to run the copy of script3 (located on the remote machine) from script2 (running on the remote machine), it does not seem to do anything. If I run script2 locally on the remote machine, it works fine (the file is generated from script3's output).

Any ideas on why this won't work? I've tried running script3 using several variations of Invoke-Expression, Invoke-Command, and Start-Process, and even tried running it with cmd. I'm also having trouble getting output on what exactly is causing the issue (stdout and stderr are often empty with the different commands). Am I missing some command or tool that would make this easier to troubleshoot? It almost seems like script3 is still running into a double-hop issue even though it only makes a web request; and if it were, I would have expected an error to be returned.

There may be a better design for what I'm trying to do. I'm fairly new to PowerShell and may be overcomplicating this.
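For reference, a rough sketch of what script2 does as described above; the share names, account, and paths are placeholders, and merging every output stream into the capture file is one way to surface errors that the remote session would otherwise swallow:

    # Inside script2, running on the remote machine. All names are placeholders;
    # $plainPassword comes from wherever script1 passes it in.
    # Map the share in the second domain with explicit credentials,
    # since CredSSP can't be set up there.
    net use \\share2.otherdomain.local\scripts $plainPassword /user:OTHERDOMAIN\svcaccount

    # Pull script3 onto this machine so it runs from a local path.
    Copy-Item \\share2.otherdomain.local\scripts\script3.ps1 C:\temp\script3.ps1

    # Run the local copy and merge *all* streams (stdout, stderr, warnings)
    # into one file, so a silent failure at least leaves a trace.
    & C:\temp\script3.ps1 *>&1 | Out-File C:\temp\script3-output.txt

(*>&1 requires PowerShell 3.0 or later; on 2.0, 2>&1 merges just stderr.)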
Edit: I rewrote my scripts in Python and got it working.
I am looking to collect data snapshots at random intervals from various machines on our network that we don't own, but on which we may get permission to install an agent to collect this data.

These machines are either in a domain or a workgroup, and the kind of data I get depends on the role they play and the information they hold. The machines run Windows Server 2003 and above, and I do not want to install anything on them before I get started, so I thought I could use PowerShell scripts that I remotely invoke from my server, passing each machine the script it has to run to return the data.

I was wondering whether this is possible with PowerShell scripts and, since this is supposed to run in a secure environment, whether there are any major security implications with this approach, i.e. do I need to do anything on the client machines that could make them vulnerable to security threats?
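For what it's worth, a minimal sketch of the remote-invoke approach, assuming WinRM/PowerShell remoting is enabled on the targets (on Server 2003 that means installing PowerShell 2.0 and WinRM first); the machine list, credential, and collected properties are placeholders:

    # Collect a small snapshot from each machine over PowerShell remoting.
    $machines = Get-Content .\machines.txt
    $cred     = Get-Credential    # an account with admin rights on the targets

    Invoke-Command -ComputerName $machines -Credential $cred -ScriptBlock {
        # Gather whatever snapshot data matters for the machine's role.
        $os = Get-WmiObject Win32_OperatingSystem
        New-Object PSObject -Property @{
            Host     = $env:COMPUTERNAME
            OS       = $os.Caption
            LastBoot = $os.ConvertToDateTime($os.LastBootUpTime)
        }
    } | Export-Csv .\snapshot.csv -NoTypeInformation

Plain WMI over DCOM (Get-WmiObject -ComputerName ... -Credential ...) is an alternative that avoids enabling WinRM at all, at the cost of needing the RPC/DCOM firewall exceptions open on the targets.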
By the way, these machines are not exposed to the internet and are behind a firewall.

I would appreciate it if you could point me to any other alternatives that might be useful for my analysis.
Regards
Kiran
I've searched and found several questions very similar to mine, but nothing in those answers has worked for me yet.

I have a Perl CGI script that accepts a file upload. It looks at the file, determines how it should be processed, and then calls a second, non-CGI script to do the actual processing. At least, that's how it should work.

This is running on Windows with Apache 2.0.59 and ActiveState Perl 5.8.8. The file-upload part works fine, but I can't seem to get the upload.cgi script to run the second script that does the actual processing. The second script doesn't communicate in any way with the user who sent the file (other than sending an email when it's done). I want the CGI script to run the second script (in a separate process) and then 'go away'.
So far I've tried exec; system with a 1 as the first parameter; system without the 1, invoking 'start' instead; and Win32::Process. Using system with 1 as the first parameter gave me errors in the Apache log:

    '1' is not recognized as an internal or external command,\r, referer: http://my.server.com/cgi-bin/upload.cgi

Nothing else has given me any errors, but the approaches just don't seem to work. The second script logs a message to the Windows event log as one of the first things it does, and no log entry is being created.

It works fine on my local machine under the Omni web server but not on the actual server machine running Apache. Is there an Apache config setting that could be affecting this? The upload.cgi script resides in the d:\wwwroot\test\cgi-bin directory, but the other script is elsewhere on the same machine (d:\wwwroot\scripts).
There may be a security-related problem, but it should be apparent in the logs.
This won't exactly answer your question, but it may give you other implementation ideas that avoid the potential security and performance problems.

I don't quite like mixing my web server environment with system() calls. Instead, I create an application server (usually with POE) that accepts the relevant parameters from the web server, processes the job, and notifies the web server upon completion. (Well, the notification part may not be straightforward, but that's another topic.)
I'm writing a Perl script for a website, and I need to be able to control VirtualBox via the website. I'm not sure where to start, or whether I'm even debugging in the right area, but here goes.

My server is running IIS7 on Windows Server 2008 R2. I'm also running two virtual machines through the VBoxManage command-line interface. These VMs run under SERVER\administrator.

When I open my website, it requests a login. I log in to the website as SERVER\administrator and click a link that calls my script via an XMLHttpRequest. Normally it doesn't matter which user these commands run as, but with VBoxManage, running the command as a different user yields a different list of VMs. I tried whoami, which returned SERVER\administrator, but %DOMAINNAME%\%USERNAME% returns the domain the server is joined to as the domain name and SERVER$ as the username. The VBoxManage command then fails.

On the website, impersonation is turned on. When I turn impersonation off, the whoami result changes to iis apppool\website. Any ideas on how to get around this?

As a final note, I've thought about using runas, but since it prompts for a password, there's no way to call it from a script (and that would be a poor security decision anyway, I'd imagine).
This is an oft-recurring, well-known, and well-solved problem. Instead of having one big program dealing with requests from the Web and managing the VM (strong coupling), separate the concerns and write two programs, each doing exactly one task.

The user-facing program running in the web server context can continue with limited privileges. The VM manager is a stand-alone program running with the necessary admin privileges, either invoked repeatedly from the scheduler or as a daemon/service.

Have the first communicate with the second over a message queue.
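A minimal sketch of the manager side in PowerShell, using a crude directory-based queue (a real message queue such as MSMQ would be more robust); the queue path, job format, and VM names are all assumptions:

    # Runs as SERVER\administrator (e.g. via a scheduled task or service
    # wrapper). The web-facing script merely drops small one-line text job
    # files into C:\vmqueue; it needs no elevated rights for that.
    $queue = 'C:\vmqueue'

    while ($true) {
        Get-ChildItem $queue -Filter *.job | ForEach-Object {
            $job = (Get-Content $_.FullName -TotalCount 1).Trim()   # e.g. "start myvm"
            $verb, $vm = $job -split ' ', 2

            switch ($verb) {
                'start' { & VBoxManage startvm $vm --type headless }
                'stop'  { & VBoxManage controlvm $vm acpipowerbutton }
            }

            Remove-Item $_.FullName    # job handled; dequeue it
        }
        Start-Sleep -Seconds 5
    }

Because the queue worker always runs as the same account, VBoxManage always sees the same set of VMs, which sidesteps the impersonation problem entirely.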