I'm making a makeshift CI/CD system for my app, the app stops itself when notified of a push to a Github repo and the script automatically runs git pull to bring in changes and some more commands depending on the things that changed. Some of the changes could be to the script.
I want the script to restart itself, without infinite nesting where it could hog resources.
while ($true) {
    git pull
    # check for changes...
    if ($runScriptChanged) {
        break
    }
    node index.js
}
# ???
Error checking and the other update steps are omitted for brevity.
Having the script call itself would probably work, but again, it could nest and hog resources indefinitely until stopped.
Making a new file to run the above script still leaves a file in the repo that can't be updated automatically.
Start-Process is the best option I've found so far, but I'm not sure about its behavior on Linux.
When does the launching shell close? Is it the same as on Windows with -NoNewWindow, where it stays open as long as something is still using it? (I'm currently running this on Windows Server, so Linux compatibility isn't a big concern, but it would be nice to have.)
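For reference, this is roughly the relaunch I have in mind with Start-Process (just a sketch; on Windows PowerShell the executable would be powershell rather than pwsh, and the arguments are illustrative):
# Sketch: start a fresh copy of this script as a separate process, then exit
# the current one, so instances replace each other instead of nesting.
Start-Process -FilePath 'pwsh' `
    -ArgumentList '-File', $PSCommandPath `
    -WorkingDirectory $PSScriptRoot
exit   # the old instance ends here, so nothing stays nested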
Which way should I use? Thanks
You may consider using PowerShell jobs and the Start-Job cmdlet. It starts your process in the background and also offers monitoring and management through the other -Job cmdlets such as Get-Job, Wait-Job, Stop-Job, etc.
See about_Jobs for more information.
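A minimal sketch of the job-based approach (the script path is a placeholder for whatever your loop runs):
# Run the worker in a background job, leaving the current shell free
$job = Start-Job -FilePath '.\run.ps1'
Get-Job                 # inspect job state
Wait-Job $job           # block until it finishes (or use -Timeout)
Receive-Job $job        # collect its output
Remove-Job $job         # clean up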
I am fairly new to writing code in PowerShell. For my job I have to write multiple PowerShell scripts that change hardware and software settings, as well as the Registry and Group Policy, to get certain applications to run. These applications are a little older, and upgrading the software or the hardware they run on is NOT an option. As an example, when Microsoft releases new patches on Patch Tuesday, there is a high probability that an applied patch changes something, which is where I come in to write a script that fixes the issue. I have multiple scripts that I run, and when they are run they may end up terminating because of an error code or an exit code. A large part of the time I do not know immediately that a script has failed.
I am trying to figure out a script that I can run in a second PowerShell console window. Its only purpose would be to sit there on the screen, wait, and monitor. Then, when I execute a script or application (the only file extensions I am worried about are EXE, BAT, CMD, and PS1), if the script/application I just ran ends with an exit code or an error code, output that to the screen, in real time.
Below, I have a small piece of code that kind of works, but it is not what I am wanting.
I have researched online and read and read tons of stuff. But I just can't seem to find what I am looking for.
Could someone please help me put together a script that does what I am describing?
Thank you for your help!!!!
# Path to the thing being launched; this could be an EXE, CMD, BAT, or PS1
$ExitErrorCode = "C:\ThisFolder\ThatFolder\AnotherFolder\SomeApplication.EXE"
$proc = Start-Process $ExitErrorCode -PassThru
$handle = $proc.Handle # cache proc.Handle so ExitCode stays available after exit
$proc.WaitForExit()
if ($proc.ExitCode -ne 0) {
    Write-Warning "$ExitErrorCode exited with status code $($proc.ExitCode)"
}
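For what it's worth, one way to push the snippet above toward real-time reporting without blocking the console is to subscribe to the process's Exited event; a sketch (the path is the same placeholder as above):
$app = 'C:\ThisFolder\ThatFolder\AnotherFolder\SomeApplication.EXE'   # placeholder path
$proc = Start-Process $app -PassThru
$proc.EnableRaisingEvents = $true   # needed so ExitCode is available in the handler

Register-ObjectEvent -InputObject $proc -EventName Exited -MessageData $app -Action {
    $code = $Event.Sender.ExitCode
    if ($code -ne 0) {
        Write-Host "$($Event.MessageData) exited with status code $code"
    }
} | Out-Null

# Keep this console alive so the subscription keeps firing
while ($true) { Wait-Event -Timeout 5 | Out-Null }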
Possible duplicate of the approaches shown here:
Monitoring jobs in a PowerShell session from another PowerShell session
PowerShell script to monitor a log and output progress to another
These days I'm researching how I can keep a PowerShell process alive so I can run PS code without opening a new process each time.
The need:
- Running multiple PS scripts dynamically, as efficiently as possible, so they share the same base (custom) modules.
- Being able to communicate with the stdout/stdin/stderr of these scripts' processes while they are still running.
Ideally I'd want to open one process inside a Docker container, import my modules, and then feed it the code to run, so it executes in the same already-open process without spawning another process or importing my modules again.
The problem:
- Setting up a PS process in a Docker container takes a tremendous amount of time (roughly 2.5 s before I have even begun to run any code, and that is the PS process alone).
So far I have not found a PS way to run dynamic code in the same process without creating a new process and importing my modules again, nor a way to communicate dynamically with the new process while it is still running.
Possible Solutions:
- Create the initial PS process with -NoProfile so it won't load so slowly. (I have yet to test this, but folks on Reddit seem to approve of this method.)
- Use Start-Process with the -NoNewWindow flag; it would still spawn a new process each time, but I guess the initial setup time would be spared.
- Use Invoke-Expression on big chunks of code, but from what I understand that is not recommended and probably won't let me actively communicate with the code running there until it finishes.
Start-Process -NoNewWindow and Invoke-Expression are the only relevant mechanisms I could find so far.
I've been told AWS Lambda offers functionality similar to what I am trying to achieve, but looking at its code I did not make much progress; I figured it might be worth asking for help from people who are smarter than me :) Any help would be much appreciated.
I'm not looking for a fully working third-party solution; simply being able to mimic that behavior in PS code would be good enough for me.
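To illustrate the kind of thing I'm after, here is a rough sketch of one long-lived process that imports modules once and then runs whatever code it reads from stdin with Invoke-Expression (the module name and the ::RUN:: sentinel are made-up placeholders):
Import-Module MyCustomModule   # hypothetical module, imported once at startup

$buffer = New-Object System.Text.StringBuilder
while ($null -ne ($line = [Console]::In.ReadLine())) {
    if ($line -eq '::RUN::') {
        # Sentinel reached: run the accumulated chunk in this same process,
        # so the modules imported above stay loaded between chunks.
        try { Invoke-Expression $buffer.ToString() }
        catch { [Console]::Error.WriteLine($_.Exception.Message) }
        [void]$buffer.Clear()
    } else {
        [void]$buffer.AppendLine($line)
    }
}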
I currently have a pretty simple Powershell Script that creates an IO.FileSystemWatcher object, and calls an executable upon that event being triggered.
I can run this script without issue from an Administrator PowerShell on my Windows Server 2012 box, however it runs into issues when the script is run from Task Scheduler.
I've attempted running the task while logged on, and on a trigger while logged off, and in both instances the task status reads "Running" when I check. However, interacting with the folder that should be watched produces no results. I've added a log file to document which parts of the code are functioning, and the script DOES create the event; it is the event triggering that seems to be the issue. Has anyone heard of an issue with creating events through Task Scheduler?
I've read some forums that say it might be a domain user issue
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa
Change the REG_DWORD with ValueName disabledomaincreds to a value of 0.
Although this was already the case, and I've tried multiple variations of settings in the Task Properties as per Scripting Guy and SpiceWorks. The general consensus I've found is that it needs to be run with a -NoExit argument in order for the event to properly fire when the user is not logged in.
Extra notes:
The PowerShell script is located on a network share rather than physically on the computer (\\serverName\FTP\Folder\script.ps1).
I came across the same problem. I don't know why this works, but in your Scheduled Task, when referring to the PowerShell Script, instead of using
\\serverName\FTP\Folder\script.ps1
use
. \\serverName\FTP\Folder\script.ps1
(note the leading .).
As I understand it (as a PowerShell novice), the events you register with FileSystemWatcher will only fire while the PowerShell instance is still running. I wouldn't trust the task's "Running" status, since it is notoriously unreliable, which seems to be the Microsoft standard. I think once your script finishes executing it kills the PowerShell instance and all event listeners are garbage collected.
I just put my script to sleep forever and it works. At the end of my script, it has
while ($true) {sleep 1}
It probably wouldn't hurt to increase the sleep time, but this works.
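An alternative sketch that keeps the instance alive without spinning quite so fast, assuming the events were registered with Register-ObjectEvent (the folder path and the handler executable are placeholders):
$watcher = New-Object System.IO.FileSystemWatcher 'C:\WatchedFolder'   # placeholder path
$watcher.EnableRaisingEvents = $true

Register-ObjectEvent -InputObject $watcher -EventName Created -SourceIdentifier FileCreated -Action {
    # placeholder handler; the real script would call its executable here
    Start-Process 'C:\Tools\HandleNewFile.exe' -ArgumentList $Event.SourceEventArgs.FullPath
} | Out-Null

# Events registered with -Action are handled by the action, not queued, so
# Wait-Event simply idles; the point is just to keep this instance alive.
while ($true) { Wait-Event -Timeout 60 | Out-Null }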
I'm a nub scripter and am trying to write a really simple script to taskkill 2 programs and then uninstall 1 of them.
I wrote it in Powershell and stuck it in SCCM for deployment...however every time I deploy it, it's not running the last line to uninstall the program.
Here's the code:
# Closing Outlook instance
#
taskkill /IM outlook.exe /F
#
# Closing Linkpoint instance
#
taskkill /IM LinkPointAssist.exe /F
#
# Uninstalling Linkpoint via uninstall string if in Program Files
#
MsiExec.exe /X {DECDCD14-DEF6-49ED-9440-CC5E562FDC41} /qn
#
# Uninstalling Linkpoint via WmiObject if installed manually in AppData
Get-WmiObject -class win32_product -Filter "Name like '%Linkpoint%'" | ForEach-Object { $_.Uninstall()}
#
Exit
Can someone help? SCCM says the script completes with no error and I know it's able to execute it since the taskkills work...but it's not uninstalling the program.
Thanks in advance for any input.
So, SCCM is running this script, and nothing in the script is going to throw an error.
If you want to throw an error which SCCM can return to know how the deployment went, you need to add an extra step.
$result = Get-WmiObject -Class win32_product -Filter "Name like '%Linkpoint%'" | ForEach-Object { $_.Uninstall() }
if ($result.ReturnValue -ne 0) {
    [System.Environment]::Exit(1603)
} else {
    [System.Environment]::Exit(0)
}
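One caveat with the snippet above: if the WMI filter matches more than one product, $result is an array rather than a single object, so checking every uninstall result is safer; for example:
$failed = @($result) | Where-Object { $_.ReturnValue -ne 0 }
if ($failed) {
    [System.Environment]::Exit(1603)
} else {
    [System.Environment]::Exit(0)
}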
I see a lot of these kinds of questions come through on SO and SF: Someone struggling with unexpected behavior of an application, script, or ConfigMgr and very little information about the assumptions I can make about their environment. At that stage, it would typically be days of interaction to narrow the problem to a point where a solution is possible.
I'm hoping this answer can serve as a reference for future such questions. The first question to OP should be "Which of these 9 principles are you violating?" You could think of it as a sort of Joel Test for ConfigMgr application packaging.
Nine Steps to Better ConfigMgr Application Packages
I have found that installing and uninstalling applications reliably using ConfigMgr requires carefully sticking to a bunch of principles. I learned these principles the hard way. If you're struggling to figure out why an application is not working right under ConfigMgr, odds are that you will answer "no" to one of the following questions.
1. Are you testing the entire lifecycle?
In order to have any hope of reliably managing an application you need to test the entire lifecycle of an application. This is the sequence I test:
Detect: make sure the detection script result is negative
Install: install the application using your installation script
Detect: make sure the detection script result is positive when run
Uninstall: uninstall using your uninstallation script
I run this sequence repeatedly making tweaks to each step until the whole sequence is working.
2. Are you testing independently of ConfigMgr first?
Using ConfigMgr to test your application's lifecycle is slow and has its own ways of failing that can mask problems with your application package. The goal, then, is to be able to test an application's installation, detection, and uninstallation separate from but equivalent to the ConfigMgr client. In order to achieve that goal you end up with three separate scripts for each application:
Install-Application.bat - the entry point for your installation script
Detect-Application.ps1 - the script that detects whether the application is installed
Uninstall-Application.bat - the entry point for your uninstallation script
Each of these three scripts can be invoked directly by either you or the ConfigMgr client. For applications installed as system you need to use psexec -s to invoke scripts in the same context as ConfigMgr (caveat).
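For example, one pass through the lifecycle from step 1, invoked from an elevated PowerShell prompt as SYSTEM the way the ConfigMgr client would run it (the C:\Pkg paths are placeholders):
# expect: not detected
psexec -s powershell.exe -ExecutionPolicy Bypass -File C:\Pkg\Detect-Application.ps1
psexec -s cmd.exe /c C:\Pkg\Install-Application.bat
# expect: detected
psexec -s powershell.exe -ExecutionPolicy Bypass -File C:\Pkg\Detect-Application.ps1
psexec -s cmd.exe /c C:\Pkg\Uninstall-Application.bat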
3. Are you aware of context?
Installers can behave rather differently depending on the context they are invoked in. You need to consider whether an application is installed for a user or the system. If it is installed for the system, when you test independently of ConfigMgr, use psexec -s to invoke your script.
4. Are you aware of user interaction?
An installer can also behave rather differently depending on whether a user can interact with it. To test a script as system with user interaction, use psexec -i -s.
5. Did you match ConfigMgr to the tested context and user interaction?
Once you have the full lifecycle working, make sure you select the correct corresponding options for context (installed for user vs. system) and interaction (user can interact with application, or not). If you don't do this, the ConfigMgr client will be installing the application different from the way you tested, so you really can't expect success.
6. Are you aware of the possibility of application detection context mismatch?
The context that detection scripts run in depends on whether the application is deployed to users or systems. This means that in some cases the installation and detection contexts won't match. Keep this in mind when you write your detection scripts.
7. Have you structured your scripts so that exit codes work?
ConfigMgr needs to see exit codes from your installation and uninstallation scripts in order to do the right thing. Installers signal failure or the need to reboot using exit codes. In order for exit codes to get to the ConfigMgr client you need to ensure that your install and uninstall scripts are structured correctly.
for batch scripts, use exit /b %errorlevel% to pass the exit code of your executable out to the ConfigMgr client
for PowerShell scripts, this is the only way I have seen work reliably
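A wrapper pattern along these lines (a sketch, not necessarily the approach referenced above; paths and arguments are placeholders) passes the installer's exit code through to the ConfigMgr client:
# Launch the installer, wait for it, and surface its exit code as our own
$proc = Start-Process -FilePath 'msiexec.exe' `
    -ArgumentList '/i', '"C:\Pkg\SomeApp.msi"', '/qn' `
    -Wait -PassThru
exit $proc.ExitCode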
8. Are you using PowerShell scripts for detection?
ConfigMgr has a nice user interface for checking things like the presence of files, registry keys, etc as a proxy for whether an application is installed. The problem with that scheme is that there is no way to test application detection separately from and equivalent to the ConfigMgr client. If you want to test the application lifecycle independent of the ConfigMgr client (trust me, you want that), all your detection must occur using PowerShell scripts.
9. Have you structured your PowerShell detection scripts correctly?
The rules ConfigMgr uses to interpret the output of a PowerShell detection script are arcane. Thankfully, they are documented.
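Roughly, the documented convention is: exit code 0 with anything written to STDOUT means "installed", exit code 0 with no output means "not installed", and a non-zero exit code or STDERR output is treated as an error. A sketch of a detection script following that convention (the file path is a placeholder):
if (Test-Path 'C:\Program Files\SomeApp\SomeApp.exe') {
    Write-Output 'Detected'   # any STDOUT output plus exit code 0 = installed
}
exit 0                        # no output plus exit code 0 = not installed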
We use TeamCity, NAnt, and psexec to run a command on a remote machine as part of the release packaging. Everything works fine when I run NAnt from the console, but when running from TeamCity, psexec hangs (freezes) 50% of the time.
I looked through many forums and there seem to be workarounds, but they increase the complexity of the call and involve losing the output and the error code of the command.
Does anyone know an easier way to run a command on a remote machine?
I don't mind setting up some application on the remote machine, like a telnet server; any advice on what to do?
Thanks
I solved this issue with a combination of RemCom and a custom MSBuild task called ExecParse.
We used RemCom because it doesn't do odd things with STDOUT (and thus hang the build), and ExecParse to capture the output of the remote task and parse the exit code from it, because the standard MSBuild Exec task does not capture output. Some NAnt equivalent that captures the output would work.
I've detailed this in a blog post: "Continuous Integration: Executing Remote Tasks with TeamCity, MSBuild, RemCom, and ExecParse"
PsExec does some funky things with standard input/output, and invoking it from Java (which TeamCity is built on) raises all kinds of problems and stability issues. psexec -d did not work either.
I solved it by using Powershell in Team City.
The script below stops an IIS 7 ApplicationPool on a remote server:
[string]$HostName = "myWebServer"
[string]$Cmd = "C:\Windows\System32\inetsrv\appcmd.exe stop apppool MyMainAppPool"
Invoke-WmiMethod -class Win32_process -name Create -ArgumentList ($Cmd) -ComputerName $HostName
More about it on my blog: http://blog.degree.no/2012/03/executing-commands-and-programs-on-a-remote-machine-using-powershell/
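As a small extension of the snippet above (a sketch): Win32_Process.Create returns an object with ReturnValue and ProcessId properties, so the result of the remote call can be checked:
$result = Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList $Cmd -ComputerName $HostName
if ($result.ReturnValue -ne 0) {
    Write-Warning "Remote process creation failed with code $($result.ReturnValue)"
}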
How about putting a (NAnt) time-out on the psexec call and repeating it until no time-out happens?
I use PsExec with the -d option (don't wait for it to finish) and capture the return code. When you use -d, the return code is the process ID of the process running on the remote system. Then I use PsList to poll the remote system for that process ID until I no longer find it.
What happens if you setup TeamCity build agent on remote machine and let it perform the operation locally, passing it the binaries with "Artifact Dependencies"?