I'm trying to set up some performance monitors. I also want to do some things with the collected data (CSV), including analyzing it with some PowerShell scripting each time the collection is segmented. Here is my command to create the logman entry:
logman create counter -n NetLog -f csv -si 00:00:30 `
-cnf 00:01:00 -c "\Network Interface(*)\Bytes Total/sec" -r -v mmddhhmm `
-b 00:00:00 -e 23:59:59 -rc C:\PerfLogs\Admin\NetLogConfig\hello.cmd
Note that details like the segment length and sample interval are only that low for testing purposes. Production will be much different, though still undecided, but I digress. Now, this works great:
logman create counter -n NetLog -f csv -si 00:00:30 `
-cnf 00:01:00 -c "\Network Interface(*)\Bytes Total/sec" -r -v mmddhhmm `
-b 00:00:00 -e 23:59:59
But for some reason, as soon as I add -rc C:\PerfLogs\Admin\NetLogConfig\hello.cmd, the counter stops when the collection period is segmented instead of segmenting and continuing. Note that the command to create the counter succeeds, and the counter starts successfully, but the collector set is halted when the file is closed for segmentation. It also does not run the command at all. I have tried a file type of .bat instead of .cmd, and I have tried typing a command directly into the -rc parameter (e.g. -rc echo "Hello World!"). The .bat makes no difference, and entering a command directly gets me a nice error message about it not being an acceptable parameter. Inside the file is a placeholder command that right now reads:
echo "Hello World!"
pause
So how do I get a command to run upon segmentation/file close? I will consider work-arounds, but this seems by far the cleanest solution.
Read the newest logman create counter reference:
-[-]rc <task> Run the command specified each time the log is closed.
Note that the -rc switch's parameter is now <task> (in an older TechNet document it is -rc FileName). So what does <task> stand for? Read Data Collector Set Properties and/or run perfmon.exe; see the property text quoted below:
Task - You can run a Windows Management Instrumentation (WMI) task upon completion
of the Data Collector Set collection by entering the command in the
Run this task when the data collector set stops box.
Refer to WMI task documentation for options.
And finally, from the WMI task documentation I recognized that <task> in -rc <task> should be the name of a scheduled task. The following modification of your attempt gives proof (a new cmd window flashes every minute, and the output files are filled as expected):
erase d:\bat\SO\38859079.txt
erase C:\PerfLogs\Admin\NetLog*.csv
logman delete NetLog
logman create counter -n NetLog -f csv -si 00:00:15 -cnf 00:01:00 ^
-rf 00:05:00 -c "\Network Interface(*)\Bytes Total/sec" -r -v mmddhhmm ^
-b 00:00:00 -e 23:59:59 -rc 38859079
logman start NetLog
timeout /T 360 /Nobreak
logman stop NetLog
dir /B /S C:\PerfLogs\Admin\NetLog*.csv
type d:\bat\SO\38859079.txt
Schtasks /Query /FO LIST /V /TN 38859079 | findstr /I /C:"Task To" /C:"Type"
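For completeness: the script above only queries the task 38859079, it does not show how it was registered. A hedged sketch of one way to create it from a .bat file (using /SC ONCE with a dummy start time is just one way to get an effectively on-demand task from the command line; the task name and command are taken from the Schtasks query output below):
rem doubled %% keeps date/time unexpanded at registration time
Schtasks /Create /TN 38859079 /SC ONCE /ST 00:00 ^
 /TR "cmd /c >>d:\bat\SO\38859079.txt echo %%date%% %%time%%"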
Output:
==> D:\bat\SO\38859079.bat
==> erase d:\bat\SO\38859079.txt
==> erase C:\PerfLogs\Admin\NetLog*.csv
==> logman delete NetLog
The command completed successfully.
==> logman create counter -n NetLog -f csv -si 00:00:15 -cnf 00:01:00 -rf 00:05:00 -c "\Network Interface(*)\Bytes Total/sec" -r -v mmddhhmm -b 00:00:00 -e 23:59:59 -rc 38859079
The command completed successfully.
==> logman start NetLog
The command completed successfully.
==> timeout /T 360 /Nobreak
Waiting for 0 seconds, press CTRL+C to quit ...
==> logman stop NetLog
Error:
Data Collector Set is not running.
==> dir /B /S C:\PerfLogs\Admin\NetLog*.csv
C:\PerfLogs\Admin\NetLog_08101250.csv
C:\PerfLogs\Admin\NetLog_08101251.csv
C:\PerfLogs\Admin\NetLog_08101252.csv
C:\PerfLogs\Admin\NetLog_08101253.csv
C:\PerfLogs\Admin\NetLog_08101254.csv
==> type d:\bat\SO\38859079.txt
10.08.2016 12:51:47,99
10.08.2016 12:52:49,04
10.08.2016 12:53:50,06
10.08.2016 12:54:51,07
10.08.2016 12:55:48,00
==> Schtasks /Query /FO LIST /V /TN 38859079 | findstr /I /C:"Task To" /C:"Type"
Task To Run: cmd /c >>d:\bat\SO\38859079.txt echo %date% %time%
Schedule Type: On demand only
==>
Please note that your question has nothing to do with PowerShell (IMHO a wrong tag); in my example the scheduled task runs cmd, but it should work for PowerShell as well.
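For the PowerShell case, a minimal sketch (the task name NetLogAnalyze and the script path are my assumptions, not from the question): register a task that runs your analysis script, then point -rc at it (logman update lets you change the existing collector):
Schtasks /Create /TN NetLogAnalyze /SC ONCE /ST 00:00 ^
 /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\PerfLogs\Admin\NetLogConfig\analyze.ps1"
logman update counter NetLog -rc NetLogAnalyze
Inside analyze.ps1 you could then pick up the latest written segment, e.g. (depending on timing, the newest file may be the segment that has just been opened, so you may want the second-newest):
# analyze.ps1: process the most recently written NetLog segment
$csv = Get-ChildItem C:\PerfLogs\Admin\NetLog_*.csv |
    Sort-Object LastWriteTime | Select-Object -Last 1
$rows = Import-Csv $csv.FullName
"{0}: {1} samples" -f $csv.Name, @($rows).Count |
    Add-Content C:\PerfLogs\Admin\analyze.log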
Related
I'm on an Ubuntu 18.04 server. I know the full command-line information can be grabbed with ps auxww. For example, by running ps auxww, I can see that the command /usr/local/bin/my-program -parameter :8888 is running. How can I get the same info from PowerShell? I searched around, and all the info is about how to get command-line info on Windows.
On Ubuntu 18.04 PowerShell, did you try ps -a -F? It should give the details you are looking for.
If you need more specific details about running/all processes, try more options via the ps --help all or ps --help output commands:
PS> ps --help output
Usage:
ps [options]
Basic options:
-A, -e all processes
-a all with tty, except session leaders
a all with tty, including other users
-d all except session leaders
-N, --deselect negate selection
r only running processes
T all processes on this terminal
x processes without controlling ttys
Output formats:
-F extra full
-f full-format, including command lines
f, --forest ascii art process tree
-H show process hierarchy
-j jobs format
j BSD job control format
-l long format
l BSD long format
-M, Z add security data (for SELinux)
-O <format> preloaded with default columns
O <format> as -O, with BSD personality
-o, o, --format <format>
user-defined format
s signal format
u user-oriented format
v virtual memory format
X register format
-y do not show flags, show rss vs. addr (used with -l)
--context display security context (for SELinux)
--headers repeat header lines, one per page
--no-headers do not print header at all
--cols, --columns, --width <num>
set screen width
--rows, --lines <num>
set screen height
--help <simple|list|output|threads|misc|all>
display help and exit
For more details see ps(1).
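Since PowerShell on Linux just invokes the native ps binary, you can also combine it with PowerShell's own filtering. A small sketch (my-program is the example name from the question; -eo pid,args is a user-defined format that prints only the PID and the full command line):
PS> ps -eo pid,args | Select-String 'my-program'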
In my script I need to work with the exit status of the non-last command of a pipeline:
do_real_work 2>&1 | tee real_work.log
To my surprise, $? contains the exit code of the tee. Indeed, the following command:
false 2>&1 | tee /dev/null ; echo $?
outputs 0. A surprise, because csh's (almost) equivalent
false |& tee /dev/null ; echo $status
prints 1.
How do I get the exit code of the non-last command of the most recent pipeline?
Bash has set -o pipefail which uses the first non-zero exit code (if any) as the exit code of a pipeline.
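For example, in bash (PIPESTATUS is a bash-specific array holding the exit status of every pipeline stage):
# exit status of the first stage, without pipefail
false 2>&1 | tee /dev/null ; echo "${PIPESTATUS[0]}"   # prints 1
# or make the whole pipeline report the first failure
set -o pipefail
false 2>&1 | tee /dev/null ; echo $?                   # prints 1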
POSIX shell doesn't have such a feature AFAIK. You could work around that with a different approach:
tail -F -n0 real_work.log &
do_real_work > real_work.log 2>&1
kill $!
That is, start following the as-yet nonexistent file before running the command, and kill the tail process after the command finishes. Since do_real_work is no longer part of a pipeline, its exit status is then available directly in $?.
I am trying to create a batch file to start some microservices and a database:
1 FOR /F "tokens=4 delims= " %%P IN ('netstat -a -n -o ^| findstr :1000') DO @ECHO TaskKill.exe /PID %%P
2 FOR /F "tokens=4 delims= " %%P IN ('netstat -a -n -o ^| findstr :1001') DO @ECHO TaskKill.exe /PID %%P
3 FOR /F "tokens=4 delims= " %%P IN ('netstat -a -n -o ^| findstr :5432') DO @ECHO TaskKill.exe /PID %%P
4 start cd "C:\Program Files\PostgreSQL\10\bin\" & pg_ctl.exe -D "c:\Program Files\PostgreSQL\10\data" start
@REM to start service
5 start javaw -jar -Dserver.port=1000 text-annotation-tool-1.0-SNAPSHOT.jar
Lines 1 to 3 and line 5 execute correctly when line 4 is commented out.
Line 4 is meant to start a Postgres server in a new prompt (because of the directory change). I think the problem is with the way I have used quotes. The start at the beginning and the start at the end of line 4 serve different purposes.
Also, if I execute line 4 in a different prompt, how do I close the prompt after execution (a nohup equivalent)?
There are two errors: you can't "pass" cd to the start command, and start has the quirk of interpreting the first quoted parameter as the new window's title. So start "C:\Program Files\PostgreSQL\10\bin\" ... wouldn't work; you need to supply a dummy window title. The & also seems wrong.
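A quick illustration of the title quirk (the path is just an example):
rem opens a new, empty console window titled C:\some folder\app.exe:
start "C:\some folder\app.exe"
rem actually runs app.exe, using an empty string as the title:
start "" "C:\some folder\app.exe"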
So you need:
start "Postgres Server" "C:\Program Files\PostgreSQL\10\bin\pg_ctl.exe" -D "c:\Program Files\PostgreSQL\10\data" start
As the full path to pg_ctl.exe is supplied, there is no need for a cd.
But if you want to define the default directory for the new process, you have to use the /D parameter:
start "Postgres Server" /D "C:\Program Files\PostgreSQL\10\bin" pg_ctl.exe -D "c:\Program Files\PostgreSQL\10\data" start
Unrelated, but: putting the Postgres data directory into c:\Program Files\ is a very bad idea. That directory has special permissions for a purpose. You should use %ProgramData% or %AppData% instead.
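For example (the %ProgramData% path is only an illustration; the cluster would have to be initialized or moved there first):
start "Postgres Server" /D "C:\Program Files\PostgreSQL\10\bin" pg_ctl.exe -D "%ProgramData%\PostgreSQL\10\data" start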
I've submitted my job by the following command:
bsub -e error.log -o output.log ./myScript.sh
I have a question: why are the output and error logs available only once the job has ended?
Thanks
LSF doesn't stream the output back to the submission host. If the submission host and the execution host share a file system, and the JOB_SPOOL_DIR is on that shared file system (the spool directory is $HOME/.lsbatch by default), then you should see the stdout and stderr there. After the job finishes, the files there are copied back to the location specified by bsub.
Check bparams -a | grep JOB_SPOOL_DIR to see if the admin has changed the location of the spool dir. With or without the -o/-e options, while the job is running its stdout/err will be captured in the job's spool directory. When the job is finished, the stdout/stderr is copied to the filenames specified by bsub -o/-e. The location of the files in the spool dir is $JOB_SPOOL_DIR/<jobsubmittime>.<jobid>.out or $JOB_SPOOL_DIR/<jobsubmittime>.<jobid>.err
[user1@beta ~]$ cat log.sh
LINE=1
while :
do
echo "line $LINE"
LINE=$((LINE+1))
sleep 1
done
[user1@beta ~]$ bsub -o output.log -e error.log ./log.sh
Job <930> is submitted to default queue <normal>.
[user1@beta ~]$ tail -f .lsbatch/*.930.out
line 1
line 2
line 3
...
According to the LSF documentation the behaviour is configurable:
If LSB_STDOUT_DIRECT is not set and you use the bsub -o option, the standard output of a job is written to a temporary file and copied to the file you specify after the job finishes.
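So if you need the output streamed straight to the final file while the job runs, a hedged sketch of the configuration change (lsf.conf lives under $LSF_ENVDIR; reconfiguring the daemons afterwards, e.g. lsadmin reconfig and badmin mbdrestart, is assumed, and you should check with your cluster admin first):
# $LSF_ENVDIR/lsf.conf
LSB_STDOUT_DIRECT=Y
With that set, bsub -o output.log writes to output.log as the job runs instead of going through the spool directory.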
I am using PsExec to run a remote batch file. I pass input to PsExec and redirect it to the remote batch file, which expects a file name as its input. However, while redirecting, the file name becomes garbage such as ###&#*, which means the actual file name the user gives is not passed to the batch file. Can anyone tell me what might be the reason for this?
pause
cd c:
set /P INPUT=Type input: %=%
echo Your input was: %INPUT%
copy %INPUT% \\remotemachineip\C$ && c:\psexec \\machineip cmd /k "c:\batchfile.bat arg1 < %INPUT% & del %INPUT%" -e -c -f -i
pause
pause
cd c:
set /P INPUT=Type input: %=%
echo Your input was: %INPUT%
copy %INPUT% \\remotemachineip\C$ && c:\psexec \\machineip cmd /k c:\batchfile.bat %INPUT% & del %INPUT% -c -f -i
pause
The remote batch file expects input from the local batch file above. So %1 (in the command below) is replaced by the %INPUT% that the user enters (the second argument to cmd.exe in the code above), and the SQLCMD command is executed. So the input the user passes in the batch file above should be redirected into the batch file below, and the SQLCMD command in it should execute successfully.
SQLCMD -Sservername -d(databasename) -iC:\LINKEDSERVER.sql -v filename="%1"
For example, if I give %INPUT% as c:\inputfile.xls, it is substituted for %1, so the command executes as:
SQLCMD -Sservername -d(databasename) -iC:\LINKEDSERVER.sql -v filename="c:\inputfile.xls"