Executing a command line from JConsole

I've recently discovered the joy of running various scripts through JConsole.exe instead of J.exe; there's generally a noticeable performance gain.
However, I sometimes need wd winexec (to call ad hoc programs, for example), and 11!:0 (wd) support is not available in the console.
Is there a way to send a command from JConsole.exe to the regular Windows command-line interpreter, or is there a workaround?

You might try the task script. See the script itself for documentation.
J6: ~system/packages/misc/task.ijs
J7: ~system/main/task.ijs
It contains utilities such as fork_jtask_, spawn_jtask_, and shell_jtask_.
You can load the script in both versions using: require 'task'
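For example, a minimal sketch, assuming each of these verbs takes a command string (check the script's own documentation for the exact conventions; here shell_jtask_ is assumed to run a command and return its output, and fork_jtask_ to launch one without waiting):
require 'task'
NB. run a shell command and capture its output
out =: shell_jtask_ 'dir'
NB. launch a program without waiting for it to finish
fork_jtask_ 'notepad.exe'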

PS1 uninstallation script in SCCM

I'm a newbie scripter trying to write a really simple script that taskkills two programs and then uninstalls one of them.
I wrote it in PowerShell and stuck it in SCCM for deployment... however, every time I deploy it, it doesn't run the last line to uninstall the program.
Here's the code:
# Closing Outlook instance
#
taskkill /IM outlook.exe /F
#
# Closing Linkpoint instance
#
taskkill /IM LinkPointAssist.exe /F
#
# Uninstalling Linkpoint via uninstall string if in Program Files
#
MsiExec.exe /X {DECDCD14-DEF6-49ED-9440-CC5E562FDC41} /qn
#
# Uninstalling Linkpoint via WmiObject if installed manually in AppData
Get-WmiObject -class win32_product -Filter "Name like '%Linkpoint%'" | ForEach-Object { $_.Uninstall()}
#
Exit
Can someone help? SCCM says the script completes with no error, and I know it's able to execute it since the taskkills work... but it's not uninstalling the program.
Thanks in advance for any input.
So, SCCM is running this script, and nothing in the script is going to throw an error.
If you want to throw an error that SCCM can pick up to tell how the deployment went, you need to add an extra step.
# Capture the result of the WMI uninstall so it can be reported back to SCCM
$result = Get-WmiObject -Class Win32_Product -Filter "Name like '%Linkpoint%'" | ForEach-Object { $_.Uninstall() }
if ($result.ReturnValue -ne 0) {
    # 1603 is the standard Windows Installer "fatal error" code
    [System.Environment]::Exit(1603)
} else {
    [System.Environment]::Exit(0)
}
I see a lot of these kinds of questions come through on SO and SF: someone struggling with unexpected behavior of an application, script, or ConfigMgr, with very little information about what I can assume about their environment. At that stage, it would typically take days of interaction to narrow the problem down to a point where a solution is possible.
I'm hoping this answer can serve as a reference for future such questions. The first question to the OP should be "Which of these nine principles are you violating?" You could think of it as a sort of Joel Test for ConfigMgr application packaging.
Nine Steps to Better ConfigMgr Application Packages
I have found that installing and uninstalling applications reliably using ConfigMgr requires carefully sticking to a bunch of principles. I learned these principles the hard way. If you're struggling to figure out why an application is not working right under ConfigMgr, odds are that you will answer "no" to one of the following questions.
1. Are you testing the entire lifecycle?
In order to have any hope of reliably managing an application, you need to test its entire lifecycle. This is the sequence I test:
Detect: make sure the detection script result is negative
Install: install the application using your installation script
Detect: make sure the detection script result is positive
Uninstall: uninstall using your uninstallation script
Detect: make sure the detection script result is negative again
I run this sequence repeatedly, making tweaks to each step, until the whole sequence works.
2. Are you testing independently of ConfigMgr first?
Using ConfigMgr to test your application's lifecycle is slow and has its own ways of failing that can mask problems with your application package. The goal, then, is to be able to test an application's installation, detection, and uninstallation separate from but equivalent to the ConfigMgr client. In order to achieve that goal you end up with three separate scripts for each application:
Install-Application.bat - the entry point for your installation script
Detect-Application.ps1 - the script that detects whether the application is installed
Uninstall-Application.bat - the entry point for your uninstallation script
Each of these three scripts can be invoked directly, either by you or by the ConfigMgr client. For applications installed as system, you need to use psexec -s to invoke scripts in the same context as ConfigMgr (with caveats).
3. Are you aware of context?
Installers can behave rather differently depending on the context they are invoked in. You need to consider whether an application is installed for a user or the system. If it is installed for the system, when you test independently of ConfigMgr, use psexec -s to invoke your script.
4. Are you aware of user interaction?
An installer can also behave rather differently depending on whether a user can interact with it. To test a script as system with user interaction, use psexec -i -s.
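For example, assuming your package lives under C:\Packages\App (a hypothetical path), the two test invocations would be:
psexec -s cmd /c C:\Packages\App\Install-Application.bat
psexec -i -s cmd /c C:\Packages\App\Install-Application.bat
The first runs the script as SYSTEM with no user interaction; the second runs it as SYSTEM on the interactive desktop.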
5. Did you match ConfigMgr to the tested context and user interaction?
Once you have the full lifecycle working, make sure you select the corresponding options in ConfigMgr for context (installed for user vs. system) and interaction (user can interact with the application, or not). If you don't do this, the ConfigMgr client will be installing the application differently from the way you tested, so you really can't expect success.
6. Are you aware of the possibility of application detection context mismatch?
The context that detection scripts run in depends on whether the application is deployed to users or to systems. This means that in some cases the installation and detection contexts won't match. Keep this in mind when you write your detection scripts.
7. Have you structured your scripts so that exit codes work?
ConfigMgr needs to see exit codes from your installation and uninstallation scripts in order to do the right thing. Installers signal failure or the need to reboot using exit codes. In order for exit codes to get to the ConfigMgr client you need to ensure that your install and uninstall scripts are structured correctly.
for batch scripts, use exit /b %errorlevel% to pass the exit code of your executable out to the ConfigMgr client
for PowerShell scripts, exiting via [System.Environment]::Exit() with the installer's exit code (as in the snippet earlier in this answer) is the only way I have seen work reliably
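For instance, a minimal sketch of an uninstall wrapper that propagates the MSI exit code (reusing the product code from the question; Start-Process -Wait -PassThru returns the process object so its exit code can be read):
$proc = Start-Process msiexec.exe -ArgumentList '/x {DECDCD14-DEF6-49ED-9440-CC5E562FDC41} /qn' -Wait -PassThru
# Hand the installer's exit code straight back to the ConfigMgr client
[System.Environment]::Exit($proc.ExitCode)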
8. Are you using PowerShell scripts for detection?
ConfigMgr has a nice user interface for checking things like the presence of files, registry keys, etc. as a proxy for whether an application is installed. The problem with that scheme is that there is no way to test application detection separately from, yet equivalently to, the ConfigMgr client. If you want to test the application lifecycle independently of the ConfigMgr client (trust me, you want that), all your detection must occur using PowerShell scripts.
9. Have you structured your PowerShell detection scripts correctly?
The rules ConfigMgr uses to interpret the output of a PowerShell detection script are arcane. Thankfully, they are documented.
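A minimal sketch of a detection script following those rules (the file path is a hypothetical stand-in for your own check; exit code 0 plus anything on STDOUT means "installed", exit code 0 with no output means "not installed"):
# Detect-Application.ps1
if (Test-Path 'C:\Program Files\LinkPoint\LinkPointAssist.exe') {
    Write-Output 'detected'   # any STDOUT output with exit code 0 => installed
}
exit 0                        # exit 0 with no output => not installed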

Execute batch file using dos()

I have a problem executing batch file commands through MATLAB. The batch file includes commands to run simulations in Adams. When I execute the batch file directly from a DOS window, it works well, but if I use MATLAB to execute it (using the dos() command), it gives an error saying 'cannot check out the license for Adams'.
This confuses me: if the license were incorrect, it should fail whether I execute the batch file directly in DOS or ask MATLAB to execute it. I also tried executing other DOS commands through MATLAB using dos(), and they worked well.
Does anyone know what the problem may be?
Such issues are commonly caused by environment variables being changed or cleared by MATLAB. I have had very similar experiences on Linux and Mac OS X, where this causes havoc when using system or unix.
On Unix-like systems, MATLAB is started from a shell script where all of this happens, so you can incorporate missing variables either there or in the .matlab7rc.sh in your home directory (the latter is preserved when you upgrade MATLAB, and it is much easier to use). I won't go into all the Unix details here.
An alternative workaround is to explicitly set those variables when you issue a system command (e.g. system('export variable=value ; ...')). It is quite a bit of work, but you can then use that MATLAB code on different computers with ease.
On Windows, I'm not completely sure about the exact location of the corresponding files (or whether MATLAB starts up in quite the same way as on Unix), but if they exist, you can probably find them in the MATLAB documentation.
Anyhow, the alternative fix should work here as well.
First you need to diagnose which system variables you need (likely a PATH or anything that has a name related to Adams).
To do so on Windows, run set from the Windows command prompt (cmd.exe) and system('set') from within MATLAB. Whatever differs between the two outputs is a possible suspect for your problem.
To inspect just a single variable, you can use the command echo %variablename%.
I will assume that you have found that the suspect environment variable is missing and should be set to value.
The workaround fix is then to run your command in MATLAB as system('set suspect=value & ...') where you replace ... with your original command.
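Putting it together, a hypothetical sketch (both the variable name ADAMS_LICENSE_FILE and its value are placeholders; substitute whatever your set comparison turns up):
% Set the missing variable for the child shell, then run the batch file
[status, output] = system('set ADAMS_LICENSE_FILE=1700@licserver & run_adams_sim.bat');
if status ~= 0
    disp(output)   % show the shell output to diagnose failures
end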

Using Perl modules vs. using system() calls

Quite recently, I wrote a few scripts in Perl for a cPanel plugin. Though most of the code was in Perl, there were quite a lot of system() calls as well, which I used to execute shell commands directly.
I am pretty sure there are Perl modules I could have used instead. Keeping in mind the time crunch, I thought using the system command was easier (to complete the project in time). In retrospect, I think that was bad programming practice.
My question is: is there any tradeoff, memory-wise or otherwise, between using Perl's modules and using system() calls? For example, what would be the difference between using:
my $directory = "temp";
mkdir $directory;
and
system("mkdir temp");
Also, if I am to use Perl modules, wouldn't that involve installing a whole lot of modules in the beginning?
The most obvious economy is that, in the first case, your Perl process is creating the directory, while in the second, Perl is starting a new process that runs a command shell which parses the command line and runs the shell mkdir command to create the directory, and then the child process is deleted. You would be creating and deleting a process and running the shell for every call to system: there is no caching of processes or similar economy.
The second thing that comes to mind is that, if your original mkdir fails, it is simple to handle the error in Perl, whereas shelling out to run a mkdir command puts your program at a distance from the error, and it is far more awkward to handle the many different problems that may arise.
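For instance, handling the failure in-process is a one-liner, with $! carrying the operating system's error text:
my $directory = "temp";
mkdir $directory
    or die "Cannot create directory '$directory': $!";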
There is also the question of maintainability and portability, which will affect you even if you aren't expecting to run your program on more than one machine. Once you abandon control to a system command, you have no control over what happens. Anyone could have written a mkdir that deletes your home directory; or, less disastrously, your program may find itself on a system where mkdir doesn't exist or does something slightly different.
In the particular case of mkdir, this is a built-in Perl operator and is part of every Perl installation. There are also many core libraries that require you to put use Module in your program, but are already installed and need no further action.
I am sure others will come up with more reasons to prefer a Perl operator or module over a shell command. In general, you should prefer to keep everything you can within the language. There are only a few cases where you have to run a third-party program, and they usually involve custom software that lets you act on proprietary data formats.

PowerShell for scripting large analysis runs

I'm completely new to PowerShell, and I know that a number of people use it to automate tasks much in the way bash and C shell programming is used in *NIX. I've successfully recompiled some ancient analysis software, written in FORTRAN, that takes individual input files. I now need to run just under 1000 cases with only slightly varied input files. The analysis software writes intermediate files, so for concurrent runs each case has to run in its own directory. Each case can take up to 40 minutes to solve, so running them individually will take a lot of time and be prone to error.
So now for the question: can PowerShell automate this, and is there some similar script out there that I can modify to do it?
The automation would need to do the following (as I see it):
Take in an input file with the various runs that have to be run
Create a subdirectory relative to the run name/number
Save a version of the input files with the variables switched in the subdirectory
Run the analysis software in the subdirectory
Look at standard/error output of analysis software to confirm it was successful
Append to a file success or failure of a run
Ideally, it would be able to run up to some number of analyses concurrently (4-6 for my machine)
If IT reboots the machine (as they do whenever they choose), I'd like to be able to restart where it left off, though I expect to lose whatever the analysis software was running during the forced reboot.
I've tried recompiling the software with vectorization and automatic parallelization, and on the tested cases the convergence time was only minimally reduced, so it is safe to assume this is effectively single-threaded.
PowerShell has lots of familiar aliases for Unix users: ls, cat, cp, etc. are implemented as aliases of native PowerShell commands, and commands are not case sensitive. What's more, you can search the help using an alias's name. That is,
man ls <=> get-help get-childitem
apropos <keyword> <=> get-help <keyword>
get-help loop
about_Break
about_Continue
about_do
about_For
about_Foreach
about_Language_Keywords
...
This should help in converting an existing script. For the rest, I'll give some hints, as the description is somewhat vague; a rough sketch tying them together follows the list.
Get-Content is used to read file contents into a variable: $myVar = cat c:\some\file.txt.
Directory creation is just md.
Capturing exe output is done by assigning to a variable: $exeOutput = c:\myApp.exe
Adding stuff to a file is Add-Content.
Background jobs are started with Start-Job.
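Putting those hints together, here is a rough sketch of a driver. Everything in it is an assumption to adapt: runs.txt as the list of case names, a template folder of input files, solver.exe as the FORTRAN program, 'CONVERGED' as the success string in its output, and a log file used both for reporting and for crude restart-after-reboot support.
$base = 'C:\analysis'
$log  = "$base\results.log"
# One run name per line in runs.txt (hypothetical input list)
foreach ($run in Get-Content "$base\runs.txt") {
    # Skip cases already logged, so a forced reboot only loses in-flight runs
    if ((Test-Path $log) -and (Select-String -Path $log -Pattern "^$run " -Quiet)) { continue }
    # Throttle to 4 concurrent background jobs
    while ((Get-Job -State Running).Count -ge 4) { Start-Sleep -Seconds 30 }
    Start-Job -Name $run -ArgumentList $run, $base, $log -ScriptBlock {
        param($run, $base, $log)
        $dir = "$base\$run"
        md $dir -Force | Out-Null                          # subdirectory per run
        Copy-Item "$base\template\*" $dir                  # input files with variables switched in
        Set-Location $dir
        $exeOutput = & "$base\solver.exe" "$run.inp" 2>&1  # capture stdout and stderr
        if ($LASTEXITCODE -eq 0 -and $exeOutput -match 'CONVERGED') {
            Add-Content $log "$run SUCCESS"
        } else {
            Add-Content $log "$run FAILURE"
        }
    } | Out-Null
}
Get-Job | Wait-Job | Out-Null
Start-Job spawns a separate PowerShell process per run, which suits a single-threaded solver; note that concurrent Add-Content appends to a single log can occasionally contend, so writing one status file per run is a safer variant.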

How to obtain stateful ssh shell session programmatically?

I am working on an application that needs to send commands to remote servers. Sending commands is easy enough with the plethora of SSH client libraries.
However, I would like shell state (i.e., current working directory, environment variables, etc.) to be preserved between commands. None of the client libraries I have seen do this. For example, the following code does not do what I want:
use Net::SSH::Perl;
my $server = Net::SSH::Perl->new($host);
$server->login($user, $pass);
$server->cmd('cd /var');
$server->cmd('pwd'); # I _would like_ this to output /var
There will be other tasks performed between sending commands, so combining the commands like $server->cmd('cd /var; pwd') is not acceptable.
Net::SSH::Expect does what you want, though the "Expect" approach is not completely reliable, as it parses the output of your commands and tries to detect when the shell prompt appears again.
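A minimal sketch of that approach (constructor arguments as in the module's documentation; exec() returns everything read up to the next prompt, so expect some echoed-command noise to trim):
use Net::SSH::Expect;
my $ssh = Net::SSH::Expect->new(
    host     => $host,
    user     => $user,
    password => $pass,
    raw_pty  => 1,
);
$ssh->login();
$ssh->exec("cd /var");
my $out = $ssh->exec("pwd");   # shell state persists: this prints /var
print $out;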
I'm not sure what you are doing exactly, but you could just start one SSH session. If you really can't do this, maybe you can just use absolute paths for everything.