In an extension, I'm trying to implement a TextEditorCommand that runs an external program on the document in the active editor. I managed to do so by registering a function with the command, within which a Task object is created and executed:
````typescript
// in function activate:
vscode.commands.registerTextEditorCommand('extension.command', process);

// in function process(textEditor):
const document = textEditor.document;
const task = new vscode.Task(
    {type: ''},
    vscode.TaskScope.Workspace,
    extensionDisplayName,
    extensionName,
    new vscode.ProcessExecution(executable, [fileName].concat(args)));
vscode.tasks.executeTask(task);
````
This works, except that executing a task causes all open documents to be saved, which is unnecessary and may be unwanted; I only need the active editor's document to be saved, which I can do myself.
Therefore my question: Is it possible to run an external process in the integrated terminal without a Task?
I checked whether I can execute the ProcessExecution on its own, without passing it to a Task, but that doesn't seem possible.
I also tried to directly create a terminal, but vscode.window.createTerminal does not provide the option to specify an executable, just a shell – though I might simply pretend that my executable is a shell?
Moreover, I already failed at creating any kind of terminal; putting
````typescript
vscode.window.createTerminal(extensionDisplayName);
````
into the `activate` function seems to have no effect whatsoever.
Turns out the transpiler process wasn't running, so my code never made it into the running extension.
I could simply use node's child_process.spawn, but then the process would not be attached to VS Code's integrated terminal.
Update: Specifying an arbitrary executable as the "shell" running in the Terminal works.
However, it seems currently impossible to keep the terminal tab open after the process ended, so a shell or another wrapper is necessary.
But then it's impossible to get the exit status of the actual (wrapped) process, except maybe by having the wrapper write it to the terminal and parsing its text content... nah, impossible too. Ugh
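For reference, here is a minimal sketch of the createTerminal approach from the update above; the name, executable, and argument values are placeholders for the extension's own, and the caveats about the closing tab and the missing exit status still apply:

````typescript
import * as vscode from 'vscode';

// Run an arbitrary executable in the integrated terminal by passing it as the "shell".
function runInTerminal(name: string, executable: string, args: string[]): void {
    const terminal = vscode.window.createTerminal(name, executable, args);
    terminal.show();
}
````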
Related
I'm working on a Neovim plugin that's basically rich integration for an external CLI program. Users of this plugin would typically already be regular manual CLI users of this program. When my plugin needs to shell out to the CLI, I'd like to have the option to have that process run inside an interactive shell session in the embedded terminal. This puts the invocations into shell history, so they can easily be modified manually or looked up later.
I know I can send input to the terminal via its PTY channel but is there any way to access its stdout and exit codes, e.g. by attaching callbacks? Especially since the process would be running inside the interactive shell, not directly as the command passed to :terminal, I don't see an easy way to do it. Can't just read the buffer, and TermClose autocommands won't work.
I.e. is there a way I can implement something like plenary.job but have that job execute inside a currently running shell in embedded terminal?
The problem occurs when one of the long-running commands is forcefully killed by process.kill.
After that, whenever I try to execute a command, it raises a Command 'extension.commandName' not found error.
Note: All the commands are properly registered in package.json under contributes >> commands. Commands are also included as part of activationEvents. Keybindings are also in place for the registered commands. I have also checked similar issues like this, but they did not cover my scenario.
The way I am handling this right now is by exposing another command that fires workbench.action.reloadWindow. Once the window is reloaded, the extension is able to handle requests again.
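For illustration, a minimal sketch of that workaround; the command id extension.recover is made up:

````typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    // Hypothetical recovery command: reload the window so the extension host
    // re-registers every command after the killed process left it in a bad state.
    context.subscriptions.push(
        vscode.commands.registerCommand('extension.recover', () =>
            vscode.commands.executeCommand('workbench.action.reloadWindow')));
}
````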
I have to think this is a solved issue, but I am just not getting it to work. So I have come to you, Stack Overflow, with this issue:
I have a Windows Server 2016 machine running in Amazon EC2. I have a machine.ps1 script in a config directory.
I create an image of the box. (I have tried both checking and unchecking the no-reboot option.)
When I create a new instance of the image I want it to run machine.ps1 at launch to set the computer name and then set routes and some config settings for the box. The goal is to do this without logging into the box.
I have read and tried:
Running Powershell scripts at Start up
and used this to ensure user data was getting passed in:
EC2 Powershell Launch Tools
I have tried setting up a scheduled task that runs machine.ps1 on startup (it just hangs).
I see the initializeInstance.ps1 startup task and have even tried to co-opt that, replacing the line that runs user data with the line that runs my script. Nothing.
If I log into the box and run machine.ps1, it will restart the computer and set the computer name and then I need to run it once more to set routes. This works manually. I just need to find a way to do it automagically.
I want to launch these instances from PowerShell, not with launch configurations and auto scaling.
You can use User data
Whenever you deploy a new server, workstation, or virtual machine, there is nearly always a requirement to make final changes to the system before it's ready for use. Typically this is done with a post-deployment script that might be triggered manually on start-up, or it might be a final step in a Configuration Manager task sequence, or, if you are using Azure, you may use the Custom Script Extension. So how do you achieve similar functionality using EC2 instances in Amazon Web Services (AWS)? If you've created your own Amazon Machine Image (AMI) you can set the script to run from the RunOnce registry key, but that can be a cumbersome approach, particularly if you want to make changes to the script after it has been embedded into the image. AWS offers a much more dynamic method of injecting a script to run upon start-up through a feature called user data.
Please refer to the following link for the same:
PowerShell User data
Windows typically won't let a PowerShell script call another PowerShell script unless it is being run as Administrator. It is a weird 'safety' feature. But it is perfectly okay to load the ps1 files and use any functions inside them.
The UserData script is typically run as "system". You would THINK that would pass muster. But it fails...
The SOLUTION: Make ALL of your scripts into PowerShell functions instead.
In your machine.ps1, wrap the contents with function syntax:

````powershell
function MyDescriptiveName { <original script contents> }
````
Then in UserData, use the function like this:

````powershell
# To use a relative path
Set-Location -Path <my location>
# Load script file into process memory
. <full-or-relpath>/machine.ps1
# Call function
MyDescriptiveName <params-if-applicable>
````
If the function needs to call other functions (aka scripts), you'll need to make those scripts into functions and load the script file into process memory in UserData also.
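Putting the pieces together, the user data might look roughly like this; the C:\config path is an assumption based on the question, and the <powershell> tags tell EC2 to run the block as PowerShell:

````powershell
<powershell>
# Hypothetical user data: load machine.ps1 (now wrapped in a function) and call it.
Set-Location -Path C:\config      # assumed location of machine.ps1
. .\machine.ps1                   # dot-source the file into process memory
MyDescriptiveName                 # run what used to be the script body
</powershell>
````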
Howdy, I am trying to run Matlab remotely on Windows via OpenSSH installed with Cygwin, but launching Matlab on Windows without the GUI seems to be impossible.
If I am logged in locally, I can launch matlab -nodesktop -nodisplay -r script, and Matlab will launch a stripped-down GUI and run the command.
However, this is impossible to do remotely via ssh, as Matlab needs to display the GUI.
Does anyone have any suggestions or work arounds?
Thanks,
Bob
Short story: is your script calling exit()? Are you using "-wait"?
Long story: I think you're fundamentally out of luck if you want to interact with it, but this should work if you just want to batch jobs up. Matlab on Windows is a GUI application, not a console application, and won't interact with character-only remote connectivity. But you can still launch the process. Matlab will actually display the GUI - it will just be in a desktop session on the remote computer that you have no access to. But if you can get it to do your job without further input, this can be made to work, for some value of "work".
Your "-r script" switch is the right direction. But realize that on Windows, Matlab's "-r" behavior is to finish the script and then go back to the GUI, waiting for further input. You need to explicitly include an "exit()" call to get your job to finish, and add try/catches to make sure that exit() gets reached. Also, you should use a "-logfile" switch to capture a copy of all the command window output to a log file so you can see what it's doing (since you can't see the GUI) and have a record of prior runs.
Also, matlab.exe is asynchronous by default. Your ssh call will launch Matlab and return right away unless you add the "-wait" switch. Check the processes on the machine you're sshing to; Matlab may actually be running. Add -wait if you want it to block until finished.
One way to do this is to use -r to call a standard job wrapper script that initializes your libraries and paths, runs a job, and does cleanup and exit. You'll also want to make a .bat wrapper that sets up the -logfile switch to point to a file with the job name, timestamp, and other info in it. Something like this at the M-code level.
````matlab
function run_batch_job(jobname)
try
    init_my_matlab_library();   % By calling classpath(), javaclasspath(), etc
    feval(jobname);             % assumes jobname is an M-file on the path
catch err
    warning('Error occurred while running job %s: %s', jobname, err.message)
end
try
    exit();
catch err
    % Yes, exit() can throw errors
    java.lang.System.exit(1);   % Scuttle the process hard to make sure job finishes
end
% If your code makes it to here, your job will hang
````
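And a sketch of the .bat wrapper mentioned above; the paths, the log-file naming, and the -nosplash switch are assumptions, with %1 being the job name:

````bat
REM Hypothetical wrapper: run one job to completion and keep a log of the command window output.
REM %1 is the job (M-file) name passed to run_batch_job.
matlab.exe -nodesktop -nosplash -wait -logfile "C:\matlab_jobs\logs\%1.log" -r "run_batch_job('%1')"
````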
I've set up batch job systems using this style in Windows Scheduler, Tidal, and TWS before. I think it should work the same way under ssh or other remote access.
A Matlab batch system on Windows like this is brittle and hard to manage. Matlab on Windows is fundamentally not built to be a headless batch execution system; assumptions about an interactive GUI are pervasive in it and hard to work around. Low-level errors or license errors will pop up modal dialog boxes and hang your job. The Matlab startup sequence seems to have race conditions. You can't set the exit status of MATLAB.exe. There's no way of getting at the Matlab GUI to debug errors the job throws. The log file may be buffered and you lose output near hangs and crashes. And so on.
Seriously consider porting to Linux. Matlab is much more suitable as a batch system there.
If you have the money or spare licenses, you could also use the Matlab Distributed Computing toolbox and server to run code on remote worker nodes. This can work for parallelization or for remote batch jobs.
There are two undocumented hacks that reportedly fix a similar problem - they are not guaranteed to solve your particular problem, but they are worth a try. Both of them depend on modifying the java.opts file:
-Dsun.java2d.pmoffscreen=false
Setting this option fixes a problem of extreme GUI slowness when launching Matlab on a remote Linux/Solaris computer.
-Djava.compiler=NONE
This option disables the Java just-in-time compiler (JITC). Note that it has no effect on the Matlab interpreter JITC. It has a similar effect to running Matlab with the '-nojvm' command-line option. Note that this prevents many of Matlab's GUI capabilities. Unfortunately, in some cases there is no alternative. For example, when running on a remote console or when running pre-2007 Matlab releases on Intel-based Macs. In such cases, using the undocumented '-noawt' command-line option, which enables the JVM yet prevents Java GUI, is a suggested compromise.
Using PuTTY, use ssh -X remote "matlab"; it should work.
I have a Linux box and I want to be able to telnet into it (port 77557) and run a few required commands without having access to the whole Linux box. So, I have a server listening on that port which echoes the entered command on the screen (for now):
````
telnet 192.168.1.100 77557
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
hello
You typed: "hello"
````
NOW:
I want to create a lot of commands that each take some args and have error codes.
Anyone has done this before?
It would be great if I could have the server, upon initialization, go through each directory and execute its __init__.py file; in turn, the __init__.py file of each command would call into a main template lib API (e.g. RegisterMe()) and register itself with the server as a function callback.
At least this is how I would do it in C/C++.
But I want the best Pythonic way of doing this.
````
/cmd/
/cmd/myreboot/
/cmd/myreboot/__init__.py
/cmd/mylist/
/cmd/mylist/__init__.py
... etc
````
IN: /cmd/myreboot/__init__.py:

````python
from myMainCommand import RegisterMe
RegisterMe(name="reboot", args=Arglist, usage="Use this to reboot the box", desc="blabla")
````
So, repeating this creates a list of commands, and when you enter a command in the telnet session, the server goes through the list, matches the command, and passes the args to that command; the command does the job and prints the success or failure to stdout.
Thx
I would build this app using a combination of the cmd2 and RPyC modules.
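For illustration, a minimal cmd2 sketch; the command name and body are placeholders, and wiring it up to a socket or to RPyC is left out:

````python
import cmd2

class CommandServer(cmd2.Cmd):
    """Each do_* method becomes a command; its docstring becomes the help text."""

    def do_reboot(self, args):
        """Use this to reboot the box"""
        self.poutput("rebooting...")   # placeholder for the real action
````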
Twisted's web server does something kinda-sorta like what you're looking to do. The general approach used is to have a loadable python file define an object of a specific name in the loaded module's global namespace. Upon loading the module, the server checks for this object, makes sure that it derives from the proper type (and hence has the needed interface) then uses it to handle the requested URL. In your case, the same approach would probably work pretty well.
Upon seeing a command name, import the module on the fly (check the built-in __import__ function's documentation for how to do this), look for an instance of "command", and then use it to parse your argument list, do the processing, and return the result code.
There likely wouldn't be much need to pre-process the directory on startup though you certainly could do this if you prefer it to on-the-fly loading.
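A rough sketch of that approach, using the cmd/ layout from the question and importlib.import_module in place of raw __import__; the attribute name "command" and the exit-code convention are assumptions:

````python
import importlib

def dispatch(name, args):
    """Load cmd.<name> on the fly and hand the argument list to its `command` object."""
    try:
        module = importlib.import_module("cmd." + name)
    except ImportError:
        print("Unknown command: %s" % name)
        return 1
    handler = getattr(module, "command", None)
    if handler is None:
        print("%s does not define a 'command' object" % name)
        return 1
    return handler(args)   # parses args, does the work, returns an exit code
````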