Running two processes in one command-line window with split-screen

I need to split the command prompt window without making two windows. On one side I need a file to be read, and on the other side (split with lines) a command prompt gathering CPU and RAM usage. What I need to know is whether I can:
- Gather CPU and RAM usage from the command prompt or a batch file
- Have two processes running side by side in the same window
- Read configuration files in a specific format (such as: server-dir=C:\Users...)
- Have both sides update every second with no graphical glitches
Could it be done? An example of this would be handle (a Minecraft server handler, which can be seen here: http://goo.gl/t3741n)
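For reference, the metric-gathering and config-parsing parts are straightforward in PowerShell. Below is a minimal sketch, assuming a key=value config file; the file name server.properties and the one-second cadence are purely illustrative:

# Parse a key=value config file into a hashtable
# ('server.properties' is a hypothetical name).
$config = @{}
Get-Content 'server.properties' | Where-Object { $_ -match '=' } | ForEach-Object {
    $key, $value = $_ -split '=', 2
    $config[$key.Trim()] = $value.Trim()
}

# Poll CPU and RAM usage roughly once per second.
while ($true) {
    $cpu = (Get-Counter '\Processor(_Total)\% Processor Time').CounterSamples.CookedValue
    $os  = Get-CimInstance Win32_OperatingSystem
    $ramUsedMB = ($os.TotalVisibleMemorySize - $os.FreePhysicalMemory) / 1KB  # counters are in KB
    Write-Host ("server-dir: {0}  CPU: {1:N1}%  RAM used: {2:N0} MB" -f $config['server-dir'], $cpu, $ramUsedMB)
    Start-Sleep -Seconds 1
}

The split-screen requirement is the hard part: plain cmd.exe has no notion of panes, so keeping both sides in a single window means either redrawing the whole console yourself each second or running under a host that supports split panes.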

Related

VS Code: Same terminal output over multiple columns

I generally run a program whose output is not very wide, but I find myself constantly scrolling up to see previous output (it runs on a timer). I would like to use more of my monitor by having the output flow over three columns instead of one big column with a lot of wasted screen real estate.
Is there any setting or extension in vscode that allows this?

Having PowerShell autofill a command-line prompt

There is an old command-line tool my company uses to deploy log files to various servers; whoever wrote it made it very repetitive.
There is a lot of prompting involved, and I want to automate this process. We have a long-term goal of replacing this .exe down the line, but for now automation works in the short term.
Example
./logdeploy.exe
Enter the destination folder:
I would like the PowerShell script to just enter the folder automatically, since it's literally the same folder every time. This exe is going to ask for it at least 20 times throughout the process, so copy-pasting gets annoying.
Is this even possible to do?
If there really is no way around simulating interactive user input in order to automate your external program, a solution is possible under the following assumption:
Your external program reads interactive responses from stdin (the standard input stream).
While doing so is typical, it's conceivable that a given program's security-sensitive prompts (such as password prompts) deliberately accept input from the terminal only, so as to expressly prevent automating responses.
If the first assumption holds, the specific method that must be used to send the response strings via stdin depends on whether the external program clears the keyboard buffer before each prompt.
(a) If it does not, you can simply send all strings in a single operation.
(b) If it does, you need to insert delays between sending the individual strings, so as to ensure that input is only sent when the external program is actively prompting for input.
This approach is inherently brittle: in the absence of a way to detect when the external program is ready to read a prompt response, you have to guess how much time needs to elapse between sending responses - and that time may vary based on many runtime conditions.
It's best to use longer delays for better reliability, which, however, results in increased runtime overall.
Implementation of (a):
As zett42 and Mathias R. Jessen suggest, use the following to send the strings C:\foo and somepass 20 times each to your external program's stdin stream:
('C:\foo', 'somepass') * 20 | ./logdeploy.exe
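(Replicating the two-element array 20 times yields 40 strings, which PowerShell writes to the program's stdin one per line.)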
Again, this assumes that ./logdeploy.exe buffers keyboard input it receives before it puts up the next prompt.
Implementation of (b):
Note: The following works in PowerShell (Core) 7+ only, because only there is output piped to an external program properly streamed (sent line by line, as it becomes available); Windows PowerShell unfortunately collects all output first.
# PowerShell 7+ only
# Adjust the Start-Sleep intervals as needed.
1..20 | ForEach-Object {
    Start-Sleep 1
    'C:\foo'
    Start-Sleep 2
    'somepass'
} | ./logdeploy.exe

PowerShell monitoring external processes as they run

I have a script that runs an external process which creates some output (a file), and I also capture the console output to a file (the log):
try
{
    p4d -r $root -jc | Out-File $output
}
catch
{
    # handle errors here (a try block needs a catch or finally to be valid)
}
I later check the log output, grab some info and the script carries on.
The problem is that the external process could stall (and once did), and I need a way to check for that on the fly so I can handle the error.
The best way I can think to do this is to monitor the file that the process creates for increasing size. Obviously this isn't without issue as it could potentially stall at any point and we don't know the resulting file size.
I will likely check the size of the last successful process and use that to set some limits.
My question is how do I achieve the whole check a process whilst it's running thing?
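A minimal sketch of one way to do this, assuming $root and $output are set as in the question and that a stall shows up as the output file no longer growing; the 5-second poll and 60-second stall threshold are arbitrary values to adjust:

# Launch p4d asynchronously and keep a handle to the process.
$proc = Start-Process -FilePath 'p4d' -ArgumentList "-r $root -jc" `
    -RedirectStandardOutput $output -NoNewWindow -PassThru

$lastSize   = -1
$stalledFor = 0
while (-not $proc.HasExited) {
    Start-Sleep -Seconds 5
    $size = (Get-Item $output -ErrorAction SilentlyContinue).Length
    if ($size -eq $lastSize) { $stalledFor += 5 } else { $stalledFor = 0 }
    $lastSize = $size
    if ($stalledFor -ge 60) {  # no file growth for a minute: treat as stalled
        $proc.Kill()
        throw "p4d appears to have stalled; $output stopped growing."
    }
}

If p4d exits normally, the loop simply ends and the script can parse the log as before.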

Running MATLAB system command in background with stdout

I'm using MATLAB and calling an .exe via the system command.
[status,cmdout] = system(command_s);
where command_s is a command string formatted earlier in my script to pass all the desired options to the .exe. The .exe would normally write to a .csv file via the > redirection operator in Windows/DOS. Instead, this output goes to cmdout, which I use later in the MATLAB script. It is working correctly and as expected. I'm doing it this way so that the process just uses memory and does not write a very large file to disk, which would then have to be read back and deleted once I'm done with it. In the end, it saves a .mat file that's usually hundreds of KB instead of the tens or hundreds of MB the .csv file would be (some unneeded data is thrown out along the way).
The issue I'm having is that, since I'm dealing with large files, the executable can take a significant amount of time. I typically have to wait about 2 minutes after executing this command. In the meantime, I have no feedback to know that it is progressing and that my system hasn't frozen. I know I could add the & symbol to the end of my command string, command_s, and run MATLAB code while this runs in the background (or asynchronously, as some would say), but that brings up an external window AND makes cmdout empty - so I cannot use the output - forcing me to sit there for 2 minutes wondering each time it executes.
Is there any way to run in the background AND get the stdout from the command?
Maybe you could try system(command_s,'-echo')? The '-echo' option should display the command's output in the MATLAB Command Window as it runs, while still capturing it in cmdout.

Run an executable without showing in "top"

I need to run an executable in the background on a server; however, it takes some parameters that I do not want to expose to others. I wonder if there is any way I can wrap this executable in another app, or preferably do it just by using MATLAB, so that the actual executable will not be shown by the top command.
I need to hide three things: 1) the parameters of, 2) the path to, and 3) the CPU usage of the executable. For the CPU usage, I do not intend to trick the system into showing a constant 0%, but I want the usage to be shown against the wrapper app instead.
For example, I have an executable in /secret_path/A, which takes the parameter -password 123 and consumes a constant 10% CPU. All of this information is very easy to spot if I type top in another terminal window. I want to create another executable, for example ~/B, which hard-codes the path and parameters of A, so I can just run B with no parameters to execute A; instead of a record for A showing in top, there would be no trace of A, and B would show 10% CPU usage in top.
Please suggest any way of doing that without requiring root privileges, or explain why it is not possible.
You can run it in a virtual machine. That way, not only can the path be hidden, but the executable itself won't have to exist on the host file system. If you run top you will see the VM using the CPU, which shouldn't be a problem for you, since apparently you only want to hide the path of the program.