I'm working on a Neovim plugin that's basically rich integration for an external CLI program. Users of this plugin would typically already be regular manual CLI users of this program. When my plugin needs to shell out to the CLI, I'd like to have the option to have that process run inside an interactive shell session in the embedded terminal. This puts the invocations into shell history, so they can easily be modified manually or looked up later.
I know I can send input to the terminal via its PTY channel, but is there any way to access its stdout and exit codes, e.g. by attaching callbacks? Especially since the process would be running inside the interactive shell, not directly as the command passed to :terminal, I don't see an easy way to do it: I can't just read the buffer, and TermClose autocommands won't work.
That is: is there a way to implement something like plenary.job, but have that job execute inside a currently running shell in the embedded terminal?
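One pattern that can work around this (a sketch, not Neovim-specific): wrap the line you send to the interactive shell so it reports its exit status out-of-band through a FIFO that the plugin watches. It's illustrated here in Python for brevity; in the plugin itself the watcher would be written in Lua on top of `vim.loop`, and the wrapped line is what you'd feed to the PTY channel. All names below are invented for the example, and `false` stands in for the real CLI invocation.

```python
import os
import subprocess
import tempfile
import threading

def watch_exit_code(fifo_path, on_exit):
    """Block until the wrapped command writes its status, then fire the callback."""
    with open(fifo_path) as f:  # blocks until the shell side opens the FIFO
        on_exit(int(f.read().strip()))

# A FIFO the wrapped command will report through.
fifo = os.path.join(tempfile.mkdtemp(), "status")
os.mkfifo(fifo)

# This is the line the plugin would send to the terminal's PTY channel;
# the trailing `echo $? > fifo` is the out-of-band status report.
wrapped = f"false; echo $? > {fifo}"

results = []
watcher = threading.Thread(target=watch_exit_code, args=(fifo, results.append))
watcher.start()

# Simulate the interactive shell executing the wrapped line.
subprocess.run(["sh", "-c", wrapped])
watcher.join()
# results[0] now holds the exit code of the wrapped command.
```

Stdout could be captured the same way by teeing it to a second path (`cmd | tee /path/out`), at the cost of slightly longer lines ending up in shell history.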
In an extension, I'm trying to implement a TextEditorCommand in response to which the document in the active editor is processed by an external program. I managed to do so by registering a function with the command, within which a Task object is created and executed:
In `activate`:

````typescript
vscode.commands.registerTextEditorCommand('extension.command', process);
````

In `process(textEditor)`:

````typescript
const document = textEditor.document;
const task = new vscode.Task(
    {type: ''},
    vscode.TaskScope.Workspace,
    extensionDisplayName,
    extensionName,
    new vscode.ProcessExecution(executable, [fileName].concat(args)));
vscode.tasks.executeTask(task);
````
This works, except that executing a task causes all open documents to be saved, which is unnecessary and may be unwanted; I only need the active editor's document to be saved, which I can do myself.
Therefore my question: Is it possible to run an external process in the integrated terminal without a Task?
I checked whether I can execute the ProcessExecution on its own, without passing it to a Task, but that doesn't seem possible.
I also tried to directly create a terminal, but vscode.window.createTerminal does not provide the option to specify an executable, just a shell – though I might simply pretend that my executable is a shell?
Moreover, I already failed at creating any kind of terminal; putting
````typescript
vscode.window.createTerminal(extensionDisplayName);
````
into the `activate` function seems to have no effect whatsoever.
Turns out the transpiler process wasn't running, so my code never made it into the running extension.
I could simply use node's child_process.spawn, but then the process would not be attached to VS Code's integrated terminal.
Update: Specifying an arbitrary executable as the "shell" running in the Terminal works.
However, it seems currently impossible to keep the terminal tab open after the process ended, so a shell or another wrapper is necessary.
But then it's impossible to get the exit status of the actual (wrapped) process, except maybe by having the wrapper write it to the terminal and parsing its text content... nah, that's impossible too. Ugh.
I have to think this is a solved problem, but I am just not getting it to work. So I have come to you, Stack Overflow, with this issue:
I have a Windows Server 2016 machine running in Amazon EC2, and a machine.ps1 script in a config directory.
I create an image of the box. (I have tried both checking and unchecking NoReboot.)
When I create a new instance of the image I want it to run machine.ps1 at launch to set the computer name and then set routes and some config settings for the box. The goal is to do this without logging into the box.
I have read and tried:
Running Powershell scripts at Start up
and used this to ensure user data was getting passed in:
EC2 Powershell Launch Tools
I have tried setting up a scheduled task that runs the machine.ps1 on start up (It just hangs)
I see the initializeInstance.ps1 start-up task and have even tried to co-opt it, replacing the line that runs user data with the line that runs my script. Nothing.
If I log into the box and run machine.ps1, it will restart the computer and set the computer name and then I need to run it once more to set routes. This works manually. I just need to find a way to do it automagically.
I want to launch these instances from powershell not with launch configurations and auto scale.
You can use user data.
Whenever you deploy a new server, workstation or virtual machine, there is nearly always a requirement to make final changes to the system before it's ready for use. Typically this is done with a post-deployment script that might be triggered manually on start-up, or it might be the final step in a Configuration Manager task sequence; if you are using Azure, you may use the Custom Script Extension.

So how do you achieve similar functionality with EC2 instances in Amazon Web Services (AWS)? If you've created your own Amazon Machine Image (AMI), you can set the script to run from the RunOnce registry key, but that can be a cumbersome approach, particularly if you want to make changes to the script once it's been embedded into the image. AWS offers a much more dynamic method of injecting a script to run on start-up, through a feature called user data.
Please refer to the following link for the same:
PowerShell User Data
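For a Windows instance, the user data only runs as PowerShell if it is wrapped in `<powershell>` tags (a `<script>` tag exists for batch files), and the raw EC2 API expects the whole payload base64-encoded; higher-level tools such as boto3 or the AWS PowerShell cmdlets handle the encoding in their own ways, so check your tool. A minimal sketch of building that payload, with an example path and the function name from the answer above:

```python
import base64

def build_windows_user_data(ps_script: str) -> str:
    """Wrap a PowerShell script in the tags EC2 expects on Windows instances."""
    return f"<powershell>\n{ps_script}\n</powershell>"

def encode_user_data(user_data: str) -> str:
    """Base64-encode user data for APIs/tools that want it pre-encoded."""
    return base64.b64encode(user_data.encode("utf-8")).decode("ascii")

# Example payload: dot-source machine.ps1 (path is illustrative) and call
# the function defined inside it.
script = ". C:\\config\\machine.ps1\nMyDescriptiveName"
user_data = build_windows_user_data(script)
encoded = encode_user_data(user_data)
```

One more detail relevant here: since machine.ps1 reboots the box and must run again afterwards, note that user data normally executes only on first boot; on Windows Server 2016, EC2Launch honors a `<persist>true</persist>` tag to re-run it on every boot.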
Windows typically won't let a PowerShell script call another PowerShell script unless it is being run as Administrator. It is a weird 'safety' feature. But it is perfectly fine to load the .ps1 files and use any functions inside them.
The UserData script is typically run as "system". You would THINK that would pass muster. But it fails...
The SOLUTION: Make ALL of your scripts into powershell functions instead.
In your machine.ps1, wrap the contents in function syntax:

````powershell
function MyDescriptiveName { <original script contents> }
````
Then in UserData, use the functions like this:

````powershell
# To use a relative path
Set-Location -Path <my location>
# Load script file into process memory
. <full-or-relpath>/machine.ps1
# Call function
MyDescriptiveName <params-if-applicable>
````
If the function needs to call other functions (aka scripts), you'll need to make those scripts into functions and load the script file into process memory in UserData also.
I want to be able to create a task for qvw files with a command (cmd, powershell, etc), just as you would through the QlikView Management Console. We would like to be able to automate some of the task creation remotely which would require this functionality.
I know that there are arguments that can be passed through the qv.exe to reload the document, but I want to actually create a task through a command line. Is this possible?
I don't think this is possible out of the box. You can control QlikView Server through the QlikView Management Services API, but for that you need to build a .NET command-line app that will do whatever you want.
If you are interested, follow this link for more info.
I've got a batch file dmx2vlc which will play a random video file through VLC-Player when called.
It works well locally but I need this to happen on another machine on the network (will be adhoc) and the result (VLC-Player playing the video) must be visible on the remote screen.
I've tried SSH, PowerShell and PsExec, but all of them seem to run the batch file and the player in the session of the command line, even after applying a patch to allow multiple logins.
So even if I get the batch file to run, it is never visible on screen.
Using Teamviewer and the like is no option as I need to be able to call all this programmatically from my dmx program.
I'm not bound to being able to call the batch directly, it would be sufficient for me if I could somehow trigger it to run.
Sadly latency is a problem here as we are talking about a lighting (thus dmx) environment.
Any hints would be greatly appreciated!
If the remote system is XP, you can use PsExec with the interactive parameter, stating the session to interact with; session 0 would probably be the console (the person physically in front of the machine).
This has issues with Windows Vista and newer as it pops up a prompt to ask the user to change their display mode first.
From memory, you could create a scheduled task on the remote system pretty easily though and as long as it's interactive the user should see it.
Good luck.
Try using the web interface. It is rather easy: VLC runs an HTTP server, and accessing particular URLs from the remote machine gives full control over VLC. Documentation can be found here.
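To sketch what that looks like: VLC's HTTP interface is enabled with `vlc --intf http --http-password <pw>` (or as an extra interface alongside the GUI with `--extraintf http`), and commands go to `/requests/status.xml` with HTTP basic auth using an empty user name. Here is a hedged Python example of building such a request; the host, port and password are placeholders:

```python
import base64
from urllib.parse import quote, urlencode
from urllib.request import Request

def vlc_request(command: str, host="192.168.1.50", port=8080,
                password="secret", **params) -> Request:
    """Build an authenticated request against VLC's HTTP interface.

    host/port/password are placeholders; port 8080 is VLC's default.
    """
    query = urlencode({"command": command, **params}, quote_via=quote)
    url = f"http://{host}:{port}/requests/status.xml?{query}"
    # VLC's web interface uses basic auth with an empty user name.
    auth = base64.b64encode(f":{password}".encode()).decode()
    return Request(url, headers={"Authorization": f"Basic {auth}"})

# Queue a file and start playing it; urllib.request.urlopen(req) would send it.
req = vlc_request("in_play", input="file:///C:/videos/random.mp4")
```

The same URL can be hit with `curl` from the batch file side. This sidesteps the remote-session problem entirely, because VLC is already running in the console session and is only being told what to play.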
I have a Linux box, and I want to be able to telnet into it (port 77557) and run a few required commands without exposing the whole Linux box. So I have a server listening on that port, which (for now) echoes the entered command.
````
telnet 192.168.1.100 77557
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
hello
You typed: "hello"
````
NOW:
I want to create a lot of commands that each take some args and have error codes.
Has anyone done this before?
It would be great if I could have the server, on initialization, go through each directory and execute its `__init__.py` file, and in turn have each command's `__init__.py` call into a main template lib API (e.g. `RegisterMe()`) and register itself with the server as a function callback.
At least this is how I would do it in C/C++, but I want the best Pythonic way of doing this.
````
/cmd/
/cmd/myreboot/
/cmd/myreboot/__init__.py
/cmd/mylist/
/cmd/mylist/__init__.py
... etc
````
In `/cmd/myreboot/__init__.py`:

````python
from myMainCommand import RegisterMe
RegisterMe(name="reboot", args=Arglist, usage="Use this to reboot the box", desc="blabla")
````
Repeating this creates a list of commands; when you enter a command in the telnet session, the server goes through the list, matches the command, and passes the args to it, and the command does the job and prints success or failure to stdout.
Thx
I would build this app using a combination of the cmd2 and RPyC modules.
Twisted's web server does something kinda-sorta like what you're looking to do. The general approach used is to have a loadable python file define an object of a specific name in the loaded module's global namespace. Upon loading the module, the server checks for this object, makes sure that it derives from the proper type (and hence has the needed interface) then uses it to handle the requested URL. In your case, the same approach would probably work pretty well.
Upon seeing a command name, import the module on the fly (check the built-in `__import__` function's documentation, or the `importlib` module, for how to do this), look for an instance of `command`, and then use it to parse your argument list, do the processing, and return the result code.
There likely wouldn't be much need to pre-process the directory on startup though you certainly could do this if you prefer it to on-the-fly loading.
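Following that Twisted-style convention (a callable with an agreed-upon name at module level, looked up on demand) rather than explicit `RegisterMe()` calls, a minimal sketch might look like this. The package is called `cmds` here only to avoid shadowing the stdlib `cmd` module, and the demo writes the package to disk just to be self-contained; the layout mirrors the question, the contents are invented:

```python
import importlib
import os
import sys
import tempfile

REGISTRY = {}

def load_command(name):
    """Import cmds.<name> on first use and look for a module-level `command` callable."""
    if name not in REGISTRY:
        mod = importlib.import_module(f"cmds.{name}")
        fn = getattr(mod, "command", None)
        if not callable(fn):
            raise LookupError(f"{name} defines no `command` callable")
        REGISTRY[name] = fn
    return REGISTRY[name]

def dispatch(line):
    """Parse 'name arg1 arg2 ...' and return the command's exit code."""
    name, *args = line.split()
    try:
        return load_command(name)(*args)
    except ModuleNotFoundError:
        print(f"unknown command: {name}")
        return 127

# Demo scaffolding: create cmds/mylist/__init__.py on disk, as in the
# question's /cmd/mylist/ layout.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "cmds", "mylist"))
open(os.path.join(root, "cmds", "__init__.py"), "w").close()
with open(os.path.join(root, "cmds", "mylist", "__init__.py"), "w") as f:
    f.write("def command(*args):\n    print('would list:', args)\n    return 0\n")
sys.path.insert(0, root)

status = dispatch("mylist /etc")
```

With on-the-fly loading, adding a new command is just dropping a new directory under `cmds/`; nothing needs to be re-registered and the server doesn't need a restart to pick it up.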