I thought Task Manager used WMI performance counters to calculate things like disk usage per process. However, when I run a PowerShell script that queries Win32_PerfFormattedData_PerfProc_Process, I get the right process and some data filled in, but the values I'm interested in are all zero. (See image.)
What I want to do is simply get (or calculate) the kind of data that Task Manager shows, and then save it for graphing and/or decision-making (e.g., for learning purposes, killing off a program that is using 100% CPU or 16 GB of RAM).
I'm running Windows 10 Pro. I just tried running as administrator, and it made no difference. The PowerShell version is 5.1.
Q: So why are these values zero? And, if Task Manager is getting these values... what API is it using if not WMI?
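Here's a trimmed-down sketch of the sort of query I'm running (the process name is a placeholder); the process is found, but the CPU and I/O properties come back as zero:

    # Query the pre-cooked per-process counters via WMI/CIM
    # (process name "notepad" is a placeholder)
    $p = Get-CimInstance -ClassName Win32_PerfFormattedData_PerfProc_Process `
                         -Filter "Name='notepad'"
    $p | Select-Object Name, IDProcess, PercentProcessorTime,
                       WorkingSet, IOReadBytesPersec, IOWriteBytesPersec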
I'm trying to write a PowerShell program that records a process's CPU and RAM usage.
After searching for ways to do it, I found out about the Get-Counter cmdlet.
It works perfectly fine for RAM, but I just can't make sense of the values I'm getting for CPU.
For example, when I tested my program I checked a process that used about 10% CPU (according to Task Manager), but with Get-Counter I get a value around 90.
Now, I know Get-Counter takes all the logical processors into account. But I have 16 logical processors, so I just can't see where the 90 is coming from.
If someone knows either how to make sense of the value I'm getting, or another way to record CPU usage, I'd be thankful.
Without seeing at least part of the code, it's hard to tell why you're getting a different value (it might be the wrong counter, the wrong value [raw vs. cooked], or maybe even some calculation being performed within your script). Either way, my guess would be that you're querying the raw value rather than the CookedValue.
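For what it's worth, here's a minimal sketch of that normalization (the process name is a placeholder): the cooked '% Processor Time' value for a process can go up to 100 times the number of logical processors, so dividing by that count should land in the same 0-100 range Task Manager shows.

    # Normalize per-process CPU to Task Manager's 0-100 scale
    $logical = [Environment]::ProcessorCount
    $sample  = Get-Counter '\Process(notepad)\% Processor Time'
    $raw     = $sample.CounterSamples[0].CookedValue   # cooked, not raw
    $percent = $raw / $logical
    "{0:N1}% of total CPU" -f $percent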
I'm using Windows 10. I need to check CPU and memory usage for a PowerShell script that is scheduled to run every 3 minutes.
I have created a data collector set with the following details.
I use perfmon. To monitor CPU usage I have added:
\Process(Powershell_12345)\% Processor Time
To monitor memory usage I have added:
\Memory\% Committed Bytes In Use
\Memory\Committed Bytes
The problem is that every time the PowerShell script gets triggered through the scheduler, a new PID is created, and the process instance name has the PID appended, like powershell_12345.
If I add just that one powershell instance, it only gets monitored while that process exists, not for the entire day.
How do I use perfmon to monitor powershell.exe for a whole day?
This should probably be on SF rather than SO, but the easiest way to do this is to monitor all Process instances and then go back and grab the powershell instance when you look through the data later. The other option would be to programmatically create the data collector set as the first step in your script (more info on that here). That's a lot more work, but you'll end up with cleaner data, so it depends on what's important to you.
A final option, if you'd like to avoid perfmon entirely, is to take a look at this function I created for my blog about a year ago. In all of my testing, its results correlate quite well with those seen in perfmon. If you're interested in going that way, let me know and I can show you how to set it up as a job that keeps running in a loop while your script runs, generating the data you're looking for.
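If you do want to roll your own, here's a rough sketch of the loop-and-record approach (the wildcard instance, file path, and sampling interval are all assumptions to adjust): using a wildcard instance sidesteps the new-PID-per-run problem, because every powershell_* instance that exists at sample time gets captured.

    # Sample every running powershell instance once per interval for a day
    # and append the results to a CSV (path and interval are placeholders)
    $end = (Get-Date).AddDays(1)
    while ((Get-Date) -lt $end) {
        Get-Counter '\Process(powershell*)\% Processor Time',
                    '\Process(powershell*)\Working Set' |
            Select-Object -ExpandProperty CounterSamples |
            Select-Object Timestamp, Path, CookedValue |
            Export-Csv C:\PerfLogs\powershell-usage.csv -Append -NoTypeInformation
        Start-Sleep -Seconds 180
    }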
IBM i V6.1
In System i Navigator, when you click System Values, the following is displayed.
By default, "Do not allow parallel processing" is selected.
What will the impact on our programs be when we choose to allow multiple processes? We have a lot of RPG IV programs and SQL queries being executed, and I think it will increase performance.
Basically, I want to turn this on in the production environment, but I'm not sure if I will break anything by doing so; for example, the input or output of different programs running in parallel, or data getting out of sequence.
I did do some research:
https://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm?info/rzakz/rzakzqqrydegree.htm
I understand each option, but I do not know the risk of changing it from the default to multiple.
First off, in order to get the most out of *MAX and *OPTIMIZE, you'd need a system with more than one core (enabled for IBM i / DB2) along with the DB2 Symmetric Multiprocessing (SMP) licensed program (57xx-SS1 option 26) installed, thus allowing the system to use SMP for queries and index builds.
For *IO, the system can use multiple tasks via simultaneous multithreading (SMT), even on a single-core POWER5 or higher box. SMT is enabled via the Processor multitasking (QPRCMLTTSK) system value.
You're unlikely to "break" anything by changing the value, as long as your applications don't make bad assumptions about result set ordering. For example, CPYxxxIMPF makes use of SQL behind the scenes; with anything but *NONE, you might end up with the rows in your DB2 table in a different order from the rows in the import file.
You will most certainly increase CPU usage. This is not a bad thing, unless you're already regularly pushing 90%+ CPU usage. If you're only using 50% of your CPU, it's probably a good thing to make use of SMT/SMP to provide better response time, even if it increases CPU utilization to 60%.
Having said that, here's a story of it being a problem... http://archive.midrange.com/midrange-l/200304/msg01338.html
Note that in the above case, the OP was pre-building work tables at sign-on in order to minimize the wait when it was time to use them. That was a great idea 20 years ago on single-threaded systems. Today, the alternative would be to take advantage of SMP/SMT and build only what's needed, when it's needed.
As you note in a comment, this kind of change is difficult to test in non-production environments, since workloads in DEV and TEST are different. So it's important to collect good performance data before and after the change. You might also consider moving in stages: *NONE --> *IO --> *OPTIMIZE, and then *MAX if you wish. I'd spend at least a month at each level if you have periodic month-end jobs.
We have 4 x NetApp filers, each with around 50 volumes. We've been experiencing performance issues and tracked them down to how fragmented the data is. We've run some measures (all coming back at 7+) and have been gradually running the WAFL reallocates manually (starting with our VMStores), which is improving the fragmentation level to around 3-4.
As ever, time is short, and I was wondering if anyone had a script which could handle this process? Preferably PowerShell or VBScript.
(We have the DataONTAP cmdlets installed and enabled.)
I know you can schedule scans, but you can't seem to tell the filer to only run one at a time.
I'd ideally like a script which would:
+ Pull a CSV of volumes
+ Measure each volume sequentially, only starting the next measure when the previous one has completed, recording the score
+ Then reallocate each volume sequentially, only starting the next reallocate when the previous one has completed, recording the new score
For your reference:
https://library.netapp.com/ecmdocs/ECMP1196890/html/man1/na_reallocate.1.html
Any help / guidance in this matter would be very much appreciated!
Are you using 7-mode or cDOT?
Anyway, I only know PowerShell. The script shouldn't be long; it would go something like this:
connect to the NetApp (using Connect-NaController / Connect-NcController)
get all the volumes (using Get-NaVol / Get-NcVol)
get the measurement for each volume (either using foreach, or perhaps the command can be run once and give the information for all the volumes)
export the output to CSV (using Export-Csv)
a foreach loop iterating over all the volumes:
- if the volume is fragmented beyond a given threshold
- run the reallocation (I do not know which command needs to be used)
If you want this thing to run forever, just put it all inside a while loop; if you are going to schedule it, you should rerun the checks to get a new CSV with the new measurements.
Disclaimer:
I am not familiar with the reallocation process, nor with its PowerShell command behavior. This post should give you pretty much the steps to be done, but I was only using common sense.
Perhaps the reallocation command only starts the reallocation process and lets it run in the background, resulting in all of the reallocations running simultaneously. If so, a while loop is needed inside the if statement, using another command to poll the status until it completes.
You should try to run this on a single volume, and then on a list of a few volumes, to make sure it runs the way you want it to.
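To make that concrete, here's a very rough 7-mode sketch along those lines. The filer name, CSV path, and the reallocate command strings and status text are all assumptions; check them against the na_reallocate man page linked above before running anything. The reallocate calls go over SSH via Invoke-NaSsh because, per the disclaimer, I don't know whether a dedicated cmdlet exists.

    Import-Module DataONTAP
    Connect-NaController filer01 | Out-Null        # filer name is a placeholder

    $scores = foreach ($vol in Get-NaVol) {
        $path = "/vol/$($vol.Name)"
        Invoke-NaSsh "reallocate measure -o $path" | Out-Null   # one-shot measure
        do {                                       # poll so only one scan runs at a time
            Start-Sleep -Seconds 60
            $status = Invoke-NaSsh "reallocate status -v $path"
        } until ($status -notmatch 'Measuring')    # status text is a guess
        [pscustomobject]@{ Volume = $vol.Name; Status = "$status" }
    }
    $scores | Export-Csv C:\reallocate-scores.csv -NoTypeInformation
    # The reallocation pass would follow the same pattern with 'reallocate start'
    # for volumes whose measured score exceeds your threshold.

The polling loop is what keeps the filer from running more than one scan at a time; as above, try it on a single volume first.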
I have a command-line application which, when executed in a shell, lists output read from a database. It gets this information in chunks, for which memory is allocated and freed.
When I execute the command (whose output spans around 6000 pages), it lists the data correctly.
But, only on AIX, when I issue 'command | more', after displaying a random number of pages, memory allocation fails in the part of the application that fetches the data in chunks.
(The same command piped through more works fine on Linux with the same data.)
Any idea why it fails on AIX? Does anybody know about the memory allocation criteria in AIX? Why does piping the output to the more command cause a memory allocation failure in the application?
It is not clear exactly what the failure is. Are you getting a segfault, or is the call to malloc returning 0, indicating that you are out of memory?
The fault could be in an AIX library but it could just as easily be within your application.
Go here: http://pic.dhe.ibm.com/infocenter/aix/v6r1/index.jsp (or the page that is appropriate for your level)
Search for "malloc debug". These facilities are not bleeding edge but they are fairly good and complete. With some time and care you can track down memory leaks and using memory after it has been freed (which sounds like the case here).
It's also good to review the available APARs for your level, looking for matches that sound similar.
There are also third-party tools like ZeroFault (http://www.zerofault.com/index.html) and Purify (which it looks like IBM purchased; http://www-01.ibm.com/software/awdtools/purify/unix/sysreq/) to help out.
Good luck