I am looking to collect data snapshots at random intervals from various machines on our network that we don't own, but on which I may get permission to install an agent to collect this data.
These machines are either in a domain or a workgroup, and the kind of data I get depends on the role they play and the information they hold. The machines run Windows Server 2003 and above, and I do not want to install anything on them before I get started, so I thought I could remotely invoke PowerShell scripts from my server, passing in the script each machine has to run to return the data.
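Roughly what I have in mind is something like the following sketch, run from my server (the machine name and script are placeholders, and I understand the targets would need WinRM/PowerShell remoting enabled):

// Sketch only: invoke a data-collection script on a remote machine over WinRM
// using the System.Management.Automation API. "target-machine" and the script
// below are placeholders.
using System;
using System.Management.Automation;
using System.Management.Automation.Runspaces;

class RemoteSnapshot
{
    static void Main()
    {
        // Default WinRM HTTP endpoint; remoting must be enabled on the target.
        var connection = new WSManConnectionInfo(new Uri("http://target-machine:5985/wsman"));
        using (Runspace runspace = RunspaceFactory.CreateRunspace(connection))
        {
            runspace.Open();
            using (PowerShell ps = PowerShell.Create())
            {
                ps.Runspace = runspace;
                ps.AddScript("Get-Service | Select-Object Name, Status"); // the script to run
                foreach (PSObject result in ps.Invoke())
                    Console.WriteLine(result);
            }
        }
    }
}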
Is this possible with PowerShell scripts? Since this is supposed to run in a secure environment, are there any major security implications with this approach? That is, do I need to change anything on the client machines that could make them vulnerable to security threats?
By the way, these machines are not exposed to the internet and are behind a firewall.
I would also appreciate it if you could point me to any alternatives that might be useful for my analysis.
Regards
Kiran
The reason I'm asking is I would like to use the out-of-proc mode, but I cannot install a service on each user's workstation, only on a central server. Is the communication between event source and listener service an ETW thing, or is there some kind of RPC I could use?
Yes, the out-of-process mode works by using ETW. ETW events are system-wide, so the service just has to listen for them.
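For example, a listener service could subscribe to your EventSource's provider with the TraceEvent NuGet package, along these lines (the provider name is a placeholder, and the session needs to run elevated):

// Sketch: listen system-wide for events from a custom EventSource provider
// using the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package.
// "MyCompany-MyApp" is a placeholder provider name; run elevated.
using System;
using Microsoft.Diagnostics.Tracing;
using Microsoft.Diagnostics.Tracing.Session;

class EtwListener
{
    static void Main()
    {
        using (var session = new TraceEventSession("MyListenerSession"))
        {
            session.EnableProvider("MyCompany-MyApp");
            session.Source.Dynamic.All += data =>
                Console.WriteLine("{0}: {1}", data.EventName, data.FormattedMessage);
            session.Source.Process(); // blocks and pumps events until the session stops
        }
    }
}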
ETW only works locally and does not offer a remote solution you could use. Your options are to install a service on each workstation that listens to ETW events (here or here) and forwards them to your server with an RPC solution you build yourself (using MSMQ comes to mind), or to have your application forward the events to your server directly so you don't need the service. Either way, you will have to build it yourself.
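If you go the MSMQ route, the forwarding half of such a service could be as small as this sketch (the queue path is a placeholder, and System.Messaging.dll must be referenced):

// Sketch: forward a captured event to a queue on a central server via MSMQ.
using System.Messaging;

class EventForwarder
{
    // DIRECT format name addressing a private queue on the central server (placeholder path).
    private readonly MessageQueue queue =
        new MessageQueue(@"FormatName:DIRECT=OS:central-server\private$\etwevents");

    public void Forward(string eventName, string payload)
    {
        queue.Send(payload, eventName); // body + label; uses the default XmlMessageFormatter
    }
}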
To add to Lars' answer, you could also log to SQL. There is a SQL sink you can use, but as with everything else, to get the most customized fit you would build your own (or inherit from another class to give you a good starting point). Be careful, though: not all sinks are created equal, and they all have their pros and cons. For example, with the SQL and Azure sinks you have to worry about high latency, and the XML formatter doesn't write the root opening and closing tags, so the output isn't well-formed XML; whatever reads that file has to supply them. Good luck!
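For reference, wiring the SQL sink up in-process looks roughly like this (the event source, instance name, and connection string are placeholders, and the LogToSqlDatabase parameter list is from memory, so check the SLAB documentation):

// Sketch: hook an in-process SLAB listener up to the SQL Server sink.
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

sealed class MyEventSource : EventSource
{
    public static readonly MyEventSource Log = new MyEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void SnapshotTaken(string machine) { WriteEvent(1, machine); }
}

class LoggingSetup
{
    public static ObservableEventListener Start()
    {
        var listener = new ObservableEventListener();
        listener.EnableEvents(MyEventSource.Log, EventLevel.Informational);
        // The SQL sink's extension method; remember the latency caveat above.
        listener.LogToSqlDatabase("MyInstance",
            @"Data Source=.;Initial Catalog=Logging;Integrated Security=True");
        return listener;
    }
}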
I have a C# application that enables users to write a test and execute it (client). It also supports distributed execution over multiple machines using a central server and agents on said machines.
The agent is practically a duplicate of the original execution capability, but it lives in a standalone solution.
We'd like to refactor that because:
Code duplication.
If a user tries to write and execute a test on a machine that runs an agent, there will be a problematic collision.
I'm considering 2 options:
Move the execution into a service that both client and agent will use. I mean a service that runs locally, not a web service.
Merge client and agent - we'll have no agent, but the server will communicate with the client as an agent instead.
I have no experience working with services. Are there any known advantages/disadvantages to either option?
A common library shared by both client and agent sounds more appropriate: it keeps the simple cases simple (just using the client) and avoids the overhead of setting up an extra service locally.
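For example (all names hypothetical), the execution engine could be pulled into a class library that both executables reference, so the logic lives in exactly one place:

// Sketch: a shared class library referenced by both the client and the agent.
// All names here are hypothetical.
namespace TestExecution.Core
{
    public class TestDefinition
    {
        public string Name { get; set; }
        public string Script { get; set; }
    }

    public class TestResult
    {
        public string Name { get; set; }
        public bool Passed { get; set; }
        public string Output { get; set; }
    }

    // The client calls this directly; the agent wraps it behind whatever
    // protocol it already uses to talk to the central server.
    public interface ITestRunner
    {
        TestResult Run(TestDefinition test);
    }
}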
Recently I tried to enumerate the Windows services on the VM where my Azure web role instance runs, using ServiceController.GetServices(). There are a lot of them, including Telephony and CloudDrive, which I don't need, so having them running is a waste of resources.
Is it possible to have them not started?
Yes, but you'll need a startup task to do this. Here is what you'll do to stop and disable the Telephony service:
sc.exe stop TapiSrv
sc.exe config TapiSrv start= disabled
As you can see I'm not using the display name (Telephony) but I'm using the service name (TapiSrv). If you want to get a list of service names for your system you can simply execute this command (in Azure you can do this via RDP):
sc.exe query
Executing this command will also give you the state of the service (running, ...).
Note: When calling sc.exe config you need to put a space after the equals sign.
Note: Stopping services can take some time, so I suggest you use a background task to stop/disable the services, in order to keep the startup time of your instance to a minimum.
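If you'd rather do this from code than from a .cmd file, the same thing can be done in the role entry point, provided the role runs elevated (a sketch; ServiceController lives in System.ServiceProcess):

// Sketch: stop and disable the Telephony service from the role entry point.
// Requires <Runtime executionContext="elevated" /> in the service definition.
using System.Diagnostics;
using System.ServiceProcess;

public class WebRole : Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint
{
    public override bool OnStart()
    {
        using (var service = new ServiceController("TapiSrv"))
        {
            if (service.Status == ServiceControllerStatus.Running)
            {
                service.Stop();
                service.WaitForStatus(ServiceControllerStatus.Stopped);
            }
        }
        // ServiceController cannot change the start mode, so shell out to sc.exe.
        Process.Start("sc.exe", "config TapiSrv start= disabled").WaitForExit();
        return base.OnStart();
    }
}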
What are the different approaches for creating scheduled tasks for web applications, with or without a separate web/desktop application?
If we're talking Microsoft platform, then I'd always develop a separate Windows Service to handle such batch tasks.
You can always reference the same assemblies that are being used by your web application to avoid any nasty code duplication.
Jeff discussed this on the Stack Overflow blog -
https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
Basically, Jeff proposed using the CacheItemRemovedCallback as a timer for calling certain tasks.
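In code, the trick looks roughly like this (the interval and the task are placeholders, not Jeff's exact code):

// Sketch of the cache-expiration trick: a non-removable cache item expires
// every two minutes, and the removal callback does the work and re-adds it.
using System;
using System.Web;
using System.Web.Caching;

public static class PseudoScheduler
{
    private const string Key = "BackgroundTaskTrigger";

    public static void Start()
    {
        HttpRuntime.Cache.Insert(Key, DateTime.UtcNow, null,
            DateTime.UtcNow.AddMinutes(2), Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable, OnCacheItemRemoved);
    }

    private static void OnCacheItemRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        DoWork();   // whatever the scheduled task is
        Start();    // re-insert the item so the cycle continues
    }

    private static void DoWork() { /* ... */ }
}

You would call PseudoScheduler.Start() once from Application_Start in Global.asax.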
I personally believe that automated tasks should be handled as a service, a Windows scheduled task, or a job in SQL Server.
Under Linux, check out cron.
I think Stack Overflow itself is using an ApplicationCache expiration to run background code at intervals.
If you're on a Linux host, you'll almost certainly be using cron.
Under Linux you can use cron jobs (http://www.unixgeeks.org/security/newbie/unix/cron-1.html) to schedule tasks.
Use URL fetchers like wget or curl to make HTTP GET requests.
Secure your URLs with authentication so that no one can execute the tasks without knowing the user/password.
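For example, a crontab entry along these lines (URL and credentials are placeholders) would hit the task URL every 15 minutes:

# Run the task every 15 minutes, quietly; URL and credentials are placeholders.
*/15 * * * * curl -s -u taskuser:secret https://example.com/tasks/run >/dev/null 2>&1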
I think Windows' built-in Task Scheduler is the right tool for this job, though it does require an application outside your web app.
This may or may not be what you're looking for, but read this article, "Simulate a Windows Service using ASP.NET to run scheduled jobs". I think Stack Overflow may use this method, or at least using it was discussed.
A very simple method that we've used where I work is this:
Set up a webservice/web method that executes the task. This webservice can be secured with username/pass if desired.
Create a console app that calls this web service. If desired, you can have the console app send parameters and/or get back some sort of metrics for output to the console or external logging.
Schedule this executable in the task scheduler of choice.
It's not pretty, but it is simple and reliable. Since the console app is essentially just a heartbeat to tell the app to go do its work, it does not need to share any libraries with the application. Another plus of this methodology is that it's fairly trivial to kick off manually when needed.
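The console app really can be that small; here's a sketch (URL and credentials are placeholders):

// Sketch: the entire heartbeat console app. The web method does the real work;
// this just calls it and prints whatever metrics come back.
using System;
using System.Net;

class TaskHeartbeat
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            client.Credentials = new NetworkCredential("taskuser", "secret");
            Console.WriteLine(client.DownloadString("https://example.com/tasks/run"));
        }
    }
}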
You can also tell cron to run PHP scripts directly rather than fetching them over HTTP. Set the permissions on the PHP files to prevent other people from accessing them, or better yet, don't keep these utility scripts in a web-accessible directory at all...
Java and Spring -- use Quartz. Very nice and reliable -- http://static.springframework.org/spring/docs/1.2.x/reference/scheduling.html
I think there are easier ways than using cron (Linux) or Task Scheduler (Windows). You can build this into your web-app using:
(a) quartz scheduler,
or if you don't want to integrate another 3rd party library into your application:
(b) create a thread on startup which uses the standard Java 'java.util.Timer' class to run your tasks.
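The same idea translates directly to .NET, swapping java.util.Timer for System.Threading.Timer (a sketch; the interval and task are placeholders):

// Sketch: an in-process timer that fires a task on a thread-pool thread
// every five minutes, started once at application startup.
using System;
using System.Threading;

class InProcessScheduler
{
    private static Timer timer; // keep a reference so the timer isn't collected

    public static void Start()
    {
        timer = new Timer(_ => DoWork(), null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
    }

    private static void DoWork()
    {
        Console.WriteLine("Running scheduled task at {0}", DateTime.UtcNow);
    }
}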
I recently worked on a project that does exactly this (obviously it is an external service but I thought I would share).
https://anticipated.io/
You can receive a webhook or an SQS event at a specific scheduled time. Dealing with these schedulers can be a pain, so I thought I'd share in case someone is looking to offload that concern.