Is there any command to find a process's heap memory usage on Solaris? I need to write a script which fetches the details and sends a mail every five minutes.
Solaris has a command called "pmap"
Usage : pmap <PID>
Usage : pmap -x <PID>
Usage : pmap -ax <PID>
You can read man pmap for the details.
If the process is a JVM, you can also use jmap from the JDK:
JdkLocation/bin/jmap -heap <ProcessID>
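For the mail-every-five-minutes part, here is a minimal sketch, assuming mailx is available; the PID, recipient address, and file paths are placeholders:
#!/bin/sh
# heapmail.sh -- sketch: mail the memory map of one process (PID passed as $1)
PID=$1
OUT=/tmp/pmap_$PID.txt
pmap -x "$PID" > "$OUT" 2>&1          # the "[ heap ]" line shows the heap segment
mailx -s "pmap output for PID $PID" admin@example.com < "$OUT"
Scheduled from cron every five minutes (classic Solaris cron does not support */5, so the minutes are listed explicitly):
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /path/to/heapmail.sh 1234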
In the past I've employed inotify for logging as well as other system functions. Now I'm in a position where I need to know each time an executable has been called, along with the complete set of command line arguments passed to it.
Short of setting up an auditd rule, is there any method to trigger on a particular executable being called, and return its command line arguments from user-space? I know the audit daemon can do this, so perhaps that's where I should look.
Monitoring process creation and termination events is a useful skill to have in your toolbox. The linked article consists of two parts: the first introduces existing tools for different platforms, and the second explains how these tools work internally.
Among the tools it describes is forkstat, which uses the netlink proc connector, and whose source code is on GitHub.
Here are commands I used:
git clone https://github.com/ColinIanKing/forkstat.git
cd forkstat
make
sudo ./forkstat
In a separate ssh session I ran an ls command and observed this output:
Time Event PID Info Duration Process
09:43:49 fork 10362 parent -bash
09:43:49 fork 10433 child -bash
09:43:49 exec 10433 ls --color=auto
09:43:49 exit 10433 0 0.004s ls --color=auto
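If you do end up going the auditd route mentioned in the question, a minimal sketch of such a rule (the key name exec-log is just an example):
sudo auditctl -a always,exit -F arch=b64 -S execve -k exec-log
sudo ausearch -k exec-log --interpret
The first command logs every execve() call; the second searches the audit log for those records, including the full argument list.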
I've recently begun using HandBrake to process some videos I downloaded, to make them lighter. I built a small Python GUI program to automate the processing, making use of the CLI version. What I am doing is generating the command according to the video and executing it with os.system, something like this:
import os

def process(args):
    # some algorithm to generate cmd using args
    cmd = "handbrakecli -i raw_video.mp4 -o video.mp4 -O -e x264"  # example command
    os.system(cmd)
    os.remove("raw_video.mp4")
The code works perfectly, but the problem is the overuse of my CPU. Usually, this takes 100% of CPU usage for a considerable amount of time. I use the program Core Temp to keep track of my processor temperature and it usually hits 78 °C.
I tried using BES (Battle Encoder Shirase) by saving the cmd command into a batch file called exec.bat and doing os.system("BES_1.7.7\BES.exe -J -m exec.exe 20"), but this simply does nothing.
Speed isn't important at all. Even if it takes longer, I just want to use less of my CPU, something around 50% would be great. Any idea on how I could do so?
In HandBrake you can pass advanced parameters so that it only uses a certain number of CPU threads.
You can use the threads option; see the HandBrake CLI documentation.
With threads you can specify how many CPU threads to use; the default is auto.
The -x parameter corresponds to the Advanced settings box in the HandBrake GUI, which is where threads goes.
The following tells HandBrake to use only one CPU thread for the encode:
-x threads=1
You can also use the veryslow value for the --encoder-preset setting to help with the CPU load:
--encoder-preset=veryslow
I actually prefer using the --encoder-preset=veryslow preset since I see an overall better quality in the encode.
And both together:
--encoder-preset=veryslow -x threads=1
So formatted with your cmd variable:
cmd = "handbrakecli -i raw_video.mp4 -o video.mp4 -O -e x264 --encoder-preset=veryslow -x threads=1" #example command
See if that helps.
One easy way in Linux is to use taskset. You can use the terminal or make a custom shortcut/command.
For example, my CPU has 8 threads but I only want to use 6 for HandBrake.
Just start the program with taskset -c 2,3,4,5,6,7 handbrake; this way threads 0 and 1 stay free for other tasks/processes and the program runs on threads 2,3,4,5,6,7.
In Windows you can change the Target of the shortcut or run this in cmd:
C:\Windows\System32\cmd.exe /C start "" /affinity FC "C:\Program Files\HandBrake\HandBrake.exe"
As far as I understand, the affinity value is a hexadecimal bitmask read from the highest thread down: the first hex digit covers threads 7-4 (1111) and the second covers threads 3-0 (1100), so FC enables threads 2-7. In my case I have an 8-thread CPU and threads 0 and 1 are left free.
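If you want to work out the mask for a different thread layout, the /affinity value is just the binary thread mask (bit 0 = thread 0) written in hexadecimal. A quick way to do the conversion, sketched here in a bash shell (any binary-to-hex converter works):
# threads 7..2 enabled, threads 1 and 0 free -> binary 11111100
printf '%X\n' "$((2#11111100))"   # prints FC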
I want a Windows cmd command to display all the processes and the CPU percentage for each process.
Is there a command which gives me this result? Thank you.
Try pslist from the Sysinternals PsTools suite.
You will need to download the tools from the Sysinternals site and either put them somewhere on your PATH or cd to the directory where they are.
Use -s to see the CPU usage of each process.
Perfmon can use a wildcard to get the CPU usage of each running process. It also has a command-line counterpart, typeperf, which writes the results to the console.
This command will produce one line of CSV output with the current CPU usage of every running process:
typeperf "\process(*)\% processor time" -sc 1
The PID is missing from this report. If you need it, you can add the PID of each process as a separate counter to log, and then match up the names:
typeperf "\process(*)\% processor time" "\process(*)\id process" -sc 1
Q: How do I find the available PBS queues on the "typical" Torque MPI system?
(asking our admin takes 24+ hours, and the system changes with constant migration)
(for example, "Std8" is one possible queue)
#PBS -q Std8
The admin finally got back. To get a list of queues on our HPC system, the command is:
$ qstat -q
qstat -f -Q
shows available queues and details about their limits (CPU time, walltime, associated nodes, etc.).
How about simply "pbsnodes" - that should probably tell you more than you care to know. Or I suppose "qstat -Q".
Run
qhost -q
to see the node-queue mapping.
Another option:
qmgr -c 'p q'
The p and q are short for print queue.
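Once you have a queue name, you can put it in the job script header as in the question, or pass it on the qsub command line (the script name here is just a placeholder):
qsub -q Std8 myjob.sh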
I have a command-line Perl script that I want to stress test. Basically what I want to do is run multiple instances of the same script in parallel so that I can figure out at what point our machine becomes unresponsive.
Currently I am doing something like this:
$ prog > output1.txt 2> err1.txt &
$ prog > output2.txt 2> err2.txt &
...
and then I am checking ps to see which instances finished and which didn't. Is there any open-source application available that can automate this process? Preferably with a web interface?
You can use xargs to run commands in parallel:
seq 1 100 | xargs -n 1 -P 0 -I{} sh -c 'prog > output{}.txt 2>err{}.txt'
With -P 0, xargs starts as many processes at once as it can, so all 100 instances run in parallel.
For a better testing framework (including parallel testing via 'spawn') take a look at Expect.
Why not use the crontab or Scheduled Tasks to automatically run the script?
You could then write something to parse the output automatically.
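For example, a minimal sketch that flags which runs wrote anything to their error files, assuming the outputN.txt / errN.txt naming from the question:
# list the runs whose stderr file is non-empty
for f in err*.txt; do
    [ -s "$f" ] && echo "$f contains errors"
done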
With GNU Parallel this will run one prog per CPU core:
seq 1 1000 | parallel prog \> output{}.txt 2\>err{}.txt
If you want to run 10 progs per CPU core, do:
seq 1 1000 | parallel -j1000% prog \> output{}.txt 2\>err{}.txt
Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ