I want to be able to automate startup and shutdown of a Windows XP VM running under Hyper-V on Windows 2008.
The VM should only be available during office hours. It's a standard Windows XP (SP3) installation. The VM should start up at 8am and shut down at 6pm (regardless of any running applications) according to a schedule that I can easily configure.
I've looked at running a batch job inside the VM itself to shut it down,
(something like at 18:00 every M,T,W,Th,F shutdown /l/y/c)
but I can't work out how to get it to start up again - possibly something under Hyper-V could be used?
And it would be nice to control both startup and shutdown from the same place.
You can use the Hyper-V PowerShell management library from CodePlex to write your own PowerShell scripts that save (suspend) and start the guest machine at the required times, and run them as scheduled tasks.
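A minimal sketch of what that could look like, assuming the CodePlex library is installed on the Hyper-V host (PowerShell 2.0), that it exposes Start-VM / Save-VM as in the releases I have seen, and with "XP-Office" as a placeholder VM name - check the exact module and cmdlet names against the version you download:

# start-xpvm.ps1 - scheduled for 08:00 Mon-Fri
Import-Module HyperV               # module name used by the CodePlex library
Start-VM -VM "XP-Office"           # start (or resume) the guest

# save-xpvm.ps1 - scheduled for 18:00 Mon-Fri
Import-Module HyperV
Save-VM -VM "XP-Office"            # suspend the guest regardless of running applications

Registering both on the host keeps startup and shutdown in one place, e.g.:

schtasks /Create /TN "Start XP VM" /TR "powershell.exe -File C:\Scripts\start-xpvm.ps1" /SC WEEKLY /D MON,TUE,WED,THU,FRI /ST 08:00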
I am new to PostgreSQL. I went through the Postgres documentation, which covers the active/passive server setup.
What I want is for the scheduler in my Spring Boot application to run only on the server pointing to the active node; when pointing to the passive node the scheduler should not run.
As soon as the active goes down and the passive takes over as the new active, the jobs should start running on the new server.
I have just 2 jobs: one runs every 5 minutes and one runs every day at 1 AM.
Need help in achieving this.
I have a Node.js web server which, as part of a CD process, I want to deploy to a staging server using Azure Release Pipeline. The problem is that if I just run a PowerShell script:
# Run-Server.ps1
node my-server.js
The pipeline will hang, since the node process blocks the PowerShell session.
What I want is to be able to launch the service, and then in the next deployment just kill the node process and run it again with the new code.
So I figured I'll use Start-Process. If I run it locally:
> Start-Process node -ArgumentList ./server.js
I can now exit the PowerShell session and the server keeps running, so I thought I could implement it the same way in my Release Pipeline.
But it turns out that once the Release Pipeline finishes running, the server is no longer available - the node process is gone.
Can you help me figure out why that is? Is there another way of achieving this? I suppose it's a pretty common use case, so there must be best practices out there on how this should be done.
Another way to achieve this is to use a full-blown web server to host and manage the node process. For example, on Windows you could use IIS with the iisnode module. This is more reliable and gives you a few other benefits:
process management (automatic start, restart on failure, etc.)
security - you can configure the user that node process will run as
scalability on multi-core CPUs
Then the process of app deployment would be just copying files to the right directory - the web server should pick up the change automatically.
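As a sketch of that deployment step (assuming the WebAdministration module on the target server; the app pool name, source folder and site path are placeholders):

# Deploy-NodeApp.ps1 - copy the new build into the IIS site
Import-Module WebAdministration

Stop-WebAppPool -Name "MyNodeAppPool"            # avoid locked files while copying
Copy-Item ".\drop\*" "C:\inetpub\my-node-app" -Recurse -Force
Start-WebAppPool -Name "MyNodeAppPool"           # iisnode starts node again on the next request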
By default, a pipeline job cleans up all of the child processes it spins up when it exits. This is what is killing your node server.
Set the Process.Clean variable to false to override the default behavior.
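In a classic release pipeline the variable can simply be added on the Variables tab; it should also be settable from the PowerShell task itself via a logging command (a sketch, assuming the agent honors a variable set this way when it does its end-of-job cleanup; the server.js path is a placeholder):

# Tell the agent not to kill child processes when the job ends
Write-Host "##vso[task.setvariable variable=Process.Clean]false"

# Detach the server from the task's session
Start-Process node -ArgumentList "./server.js"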
When I publish my application to the IIS server, the Quartz scheduler stops working after some time. On my local machine's IIS it works fine.
I need to perform some functionality every day at 11:55 pm.
By default, IIS recycles an application pool after some idle time. I guess this is your problem: the application is simply shut down if nobody uses it.
While it is possible to make an application pool in IIS run forever, it is still better not to schedule background tasks from web applications.
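For completeness, keeping the pool alive roughly amounts to the following (a sketch, assuming the WebAdministration module and an app pool named "MyAppPool", which is a placeholder; startMode AlwaysRunning needs IIS 8+ or the Application Initialization module):

Import-Module WebAdministration
# Disable the idle timeout and periodic recycling for the pool
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name processModel.idleTimeout -Value ([TimeSpan]::Zero)
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name recycling.periodicRestart.time -Value ([TimeSpan]::Zero)
# Start the pool eagerly instead of on the first request
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name startMode -Value AlwaysRunning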
Use Windows services or simply the Windows Task Scheduler for scheduling.
There are a couple of good solutions for scheduling background tasks with C# in .NET:
Topshelf
Hangfire
Here is a nice topic about using both solutions: "Setting up windows service with Topshelf and HangFire".
I have written a PowerShell script that I want to run on a daily basis, and I now want to set up a schedule for it on the system.
But since this needs to be done not only on my computer but also on my teammates' computers (around 10 machines), I was wondering whether it is possible to write a script that, when run, automatically registers the scheduled task on the system.
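That is doable. A minimal sketch, assuming Windows 8 / Server 2012 or newer so the ScheduledTasks cmdlets are available (on older systems schtasks.exe is the fallback); the script path, time and task name are placeholders:

# Register-DailyScript.ps1 - run once on each machine to set up the schedule
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
           -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\MyDailyScript.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At "09:00"
Register-ScheduledTask -TaskName "My Daily Script" -Action $action -Trigger $trigger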
I am using GlassFish 4.0 in a cluster configuration with two nodes, each running one instance. The DAS and the two instances are set up as Windows 7 services that use a logon account with administrator privileges. Upon starting the machine, each service starts and the DAS comes up along with the instances. In Windows Task Manager this appears as two java.exe processes per service, for a total of six java.exe's.
The problem is that if I use the asadmin restart-domain command, two new java.exe processes spawn and the two old ones do not die. The application that is deployed works fine, but with enough restarts using asadmin, memory starts to fill up with zombie java.exe's.
Oddly enough, running asadmin stop-domain will stop the two DAS java.exe processes, but then running asadmin start-domain starts GlassFish as a non-service. The only way to start the DAS back as a service is to run "sc start domain1" or restart the machine. Also, the only way to stop the DAS java.exe processes is with asadmin; stopping the service with "sc stop domain1" stalls and does not work. It is also odd that each service (the DAS and instances 1 and 2) starts two java.exe's, versus only one each when running as a non-service.
Is there any additional service wrapper configuration that needs to be done, or asadmin options that need to be passed in when running asadmin commands on GlassFish 4.0 running as a service?
These may be helpful. The implementation for 4 is the same as for 3.1:
https://blogs.oracle.com/foo/entry/automatic_starting_of_servers_in
https://blogs.oracle.com/foo/entry/automatic_starting_implementation_details_for
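If I recall correctly, the mechanism those posts cover is GlassFish's built-in create-service subcommand, which generates the Windows service wrapper for a domain instead of it being registered by hand, e.g. (a sketch; "domain1" is a placeholder for your domain name):

asadmin create-service domain1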