I'm trying to write a PowerShell program that "records" a process's CPU and RAM usage.
After searching for ways to do it, I found the Get-Counter cmdlet.
It works perfectly fine for RAM, but I just can't make sense of the values I'm getting for CPU.
For example, when I tested my program I checked a process that used about 10% CPU (according to Task Manager), but when checking with Get-Counter I got a value around 90.
Now I know Get-Counter takes all the logical processors into account.
But I have 16 logical processors, so I just can't see where the 90 is coming from.
If someone knows either how to make sense of the value I'm getting, or another way to record CPU usage, I would be thankful.
Without seeing at least part of the code it is hard to tell why you are getting a different value (it might be the wrong counter, the wrong value [raw vs. cooked], or maybe even some calculation being performed within your script). Either way, my guess would be that you are querying the raw value rather than the CookedValue.
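As an illustration, here is a minimal sketch (the process name is a placeholder). The per-process "% Processor Time" counter is measured against a single logical processor, so it can exceed 100 on a multi-core machine; Task Manager divides by the logical-processor count:

$procName = 'notepad'                       # hypothetical process name
$cores    = [Environment]::ProcessorCount   # 16 on your machine
$sample   = Get-Counter "\Process($procName)\% Processor Time"
$cooked   = $sample.CounterSamples[0].CookedValue
[math]::Round($cooked / $cores, 2)          # should roughly match Task Manager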
I am writing a memory-mapped character device. I can read and write correctly to the device, but my question is about the write behavior in the following case:
the count of data to write is much more than the available memory.
What would be the proper behavior in this case? Shall I write as much as I can and return an error on the next write, or fail from the beginning since the data is much more than the device capacity?
To make the question more specific, let's take a filesystem on a hard disk (ext3) as an example: what will happen if I try to write data that is more than the available space on the disk? Will it fail before it starts, or write as much data as it can and fail on the next write?
This pretty much depends upon your application. Can your application live with writing partial data? Is partial data any good?
IMO, you should do an available-memory check before writing anything and return an error if you don't have enough memory, since otherwise you won't be able to do any meaningful error recovery (if you are handling errors at all).
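If you go with the fail-upfront policy, a minimal sketch of what the driver's write handler could look like (dev_buf and DEV_CAPACITY are hypothetical placeholders, not from your post):

#include <linux/fs.h>
#include <linux/uaccess.h>

/* All-or-nothing policy: refuse the whole write with -ENOSPC if it cannot
 * fit. An ext3-style alternative would instead clamp count to the remaining
 * space, return the short count, and only fail on the following write. */
static ssize_t dev_write(struct file *filp, const char __user *buf,
                         size_t count, loff_t *ppos)
{
    if (*ppos >= DEV_CAPACITY || count > DEV_CAPACITY - *ppos)
        return -ENOSPC;                   /* would not fit: fail up front */

    if (copy_from_user(dev_buf + *ppos, buf, count))
        return -EFAULT;                   /* bad user-space pointer */

    *ppos += count;
    return count;                         /* full write succeeded */
}

For the ext3 comparison: a filesystem write typically succeeds partially, returning a short byte count, and it is the next write that fails with ENOSPC.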
I am using Robocopy in PowerShell to sort through and output millions of filenames older than a user-specified age. My question is this: Is it better to make use of Robocopy's logging feature, then import the log via Get-Content -ReadCount, or would it be better to store Robocopy's output in a variable so that the script doesn't have to write to disk?
I would have to regex either way to get the actual file names. I'm using Robocopy because many of the files have paths longer than 248 chars.
Is one way preferred over the other? I don't want to miss something that should be considered obvious.
You can skip all the theory and speculation about the multiple factors in play by measuring how long each method takes using Measure-Command, for example:
Measure-Command {$rc_output = robocopy <arguments>}
Measure-Command {robocopy <arguments> /log:rc.log; Get-Content rc.log [...]}
You'll get output telling you exactly how long each version took, down to the millisecond. Try it out on a small amount of sample data, see which one is quicker, then apply it to your millions of files.
I will add to @mjolinor's comment and the other comments. To answer the question directly:
Saving information to a variable (and therefore to RAM) is generally faster than writing directly to disk, but with an important caveat:
Variables are designed to store small amounts of data (roughly under 10 MB), not things like entire databases. If the data is large (i.e. millions of rows, tens of megabytes), then disk is always better. The problem is that if you shove a ton of information into a variable, you will fill up your RAM, and once your RAM is full things slow down, paging memory to disk starts happening, and basically everything stops working well, including any commands you are currently running (i.e. Robocopy).
Overall, because you are dealing with millions of rows, my recommendation is to write them to disk, because your results are likely to take up quite a bit more space than a variable "should" hold; a sketch of the streaming pattern follows below.
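If you do go the disk route, here is a minimal sketch (the paths, age filter, and regex are placeholders for whatever your script actually uses):

# List-only Robocopy run to a log, then stream the log in 1000-line
# chunks so the whole thing never sits in RAM at once.
robocopy C:\Source C:\Dest /L /E /MINAGE:180 /NJH /NJS /NC /NS /NDL /LOG:rc.log

Get-Content rc.log -ReadCount 1000 | ForEach-Object {
    # $_ is an array of up to 1000 lines; keep only path-like entries
    ($_ -replace '^\s+', '') -match '^[A-Za-z]:\\'
} | Out-File oldfiles.txt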
Now, after saying all that and delving into the details of how programs manipulate bits in memory: it doesn't really matter, because the time spent writing things to disk is very small compared to the time it takes to process all the files.
If you are processing 1,000,000 files at a good speed of, say, 1,000 files per second, then it will take 1,000 seconds, which is over 16 minutes, to run through all the files.
If, let's say, writing to disk is slow and costs you 5 files per second, so 995 instead of 1,000, the run takes only about 5 seconds longer. Those 5 seconds are an impact of 0.5%, which is nothing compared to the time the whole process takes.
It is much more likely that writing to a variable will cause more trouble than writing to disk.
It depends on how much output you're talking about and what your available system resources are. Writing the output to a file and reading it back in will be faster if the disk I/O time is less than the memory-management overhead of holding everything in memory. You can try it both ways and time it, but I'd try reading it into memory first while monitoring with Task Manager. If it starts throwing lots of page faults, that's a clue that you may be better off using the disk as intermediate storage.
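You can also watch those page faults from PowerShell itself; a minimal sketch (the instance name assumes the script runs in a default powershell.exe process):

# Sample the shell's own memory pressure every 5 seconds for one minute
# while the in-memory variant runs.
Get-Counter '\Process(powershell)\Page Faults/sec', '\Process(powershell)\Working Set - Private' -SampleInterval 5 -MaxSamples 12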
I have a command-line application which, when executed in a shell, lists output read from a database. It gets this information in chunks, for which memory is allocated and freed.
When I execute the command (whose output spans around 6000 pages), it lists the data correctly.
But (only on AIX) when I run 'command | more', after displaying a random number of pages, the memory allocation in the part of the application that fetches the data in chunks fails.
(The same command piped through more works fine on Linux with the same data.)
Any idea why it fails on AIX? Does anybody know the memory allocation criteria in AIX? Why would piping the output to more cause a memory allocation failure in the application?
It is not clear exactly what the failure is. Are you getting a segfault, or is the call to malloc returning NULL, indicating that you are out of memory?
The fault could be in an AIX library, but it could just as easily be within your application.
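First make the failure mode explicit; a minimal sketch of a wrapper you could drop around the chunk allocations (the name xmalloc is just a placeholder):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Report exactly which allocation failed and why, instead of letting a
 * NULL pointer propagate into a crash later. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        fprintf(stderr, "malloc(%lu) failed: %s\n",
                (unsigned long)n, strerror(errno));
        abort();   /* fail loudly at the point of failure */
    }
    return p;
}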
Go here: http://pic.dhe.ibm.com/infocenter/aix/v6r1/index.jsp (or the page that is appropriate for your level)
Search for "malloc debug". These facilities are not bleeding edge, but they are fairly good and complete. With some time and care you can track down memory leaks and use of memory after it has been freed (which sounds like the case here).
It's also good to review the available APARs for your level, looking for matches that sound similar.
There are also third-party tools to help out, like ZeroFault (http://www.zerofault.com/index.html) and Purify (which IBM appears to have acquired): http://www-01.ibm.com/software/awdtools/purify/unix/sysreq/
Good luck
I have the following setup:
Mac Pro with 2 GB of RAM (yes, not that much)
MongoDB 1.1.3 64-bit
8 million entries in a single collection
an index wanted for one field (an integer)
Calling .ensureIndex(...) takes more than an hour; in fact, I killed the process after that. My impression is that it takes far too long. Also, although I terminated the process, the index can still be seen with .getIndexes() afterwards.
Does anybody know what is going wrong here?
Adding an index on an existing data set is expected to take a while, as the entire B-tree needs to be constructed. If you think it is taking an unreasonable amount of time, or you've seen a regression in performance, the best bet is to ask about it on the mailing list.
I would just like to point out the command:
db.currentOp()
which prints the current operations running on the server, and also shows the indexing process.
The foreground indexing is done in 3 steps and the background indexing in 2 (if I remember correctly), but the background build is a lot slower. The foreground build, on the other hand, locks the collection while indexing it (i.e., not very useful on a running application server).
As said before, google "B-tree" if you are interested in how the underlying structure works.
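As an illustration, a minimal sketch from the mongo shell (the collection and field names are placeholders):

// Build the index in the background so the collection stays available,
// then watch the build from another shell.
db.items.ensureIndex({ n: 1 }, { background: true });

db.currentOp();          // shows the index build among the running operations
db.items.getIndexes();   // lists the index once the build has finished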
Does anybody know what is going wrong here?
Are you running via ssh or connecting remotely in some way? It sounds a bit like a broken-pipe issue. Did you create the index with {background: true} or not?
I am wondering how to make a process run at the command line use less processing power. The problem I'm having is that the process is basically taking over the CPU, and taking MySQL and the rest of the server with it. Everything is becoming very slow.
I have used nice before but haven't had much luck with it. If it is the answer, how would you use it?
I have also thought of putting in sleep commands, but the script would still be using up memory, so that's not the best option.
Is there another solution?
It doesn't matter to me how long it runs for, within reason.
If it makes a difference, the script is a PHP script, but I'm running it at the command line as it already takes 30+ minutes to run.
Edit: the process is a migration script, so I really don't want to spend too much time optimizing it, as it only needs to be run for testing purposes and once to go live. Just for testing, though, it keeps bringing the server pretty much to a halt... and it's a shared server.
The best you can really do without modifying the program is to change its nice value to the maximum using nice or renice. Beyond that, your best bet is to profile the program to find out where it spends most of its time and uses most of its memory, and to look for a more efficient algorithm. For example, if you are operating on a large result set from MySQL, you may want to process records one at a time instead of loading the entire result set into memory, or perhaps you can optimize your queries or the processing performed on the results.
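For the renice route, a minimal sketch (the PID is a placeholder; find the real one with ps or top):

# Drop an already-running process to the lowest CPU priority
# without restarting it.
renice -n 19 -p 12345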
You should use nice with a "niceness" of 19; this makes the process very unlikely to run when there are other processes waiting for the CPU.
nice -n 19 <command>
Be sure that the program does not have busy waits and also check the I/O wait time.
Which process is actually taking up the CPU, PHP or MySQL? If it's MySQL, nice won't help at all, since the server process is not the one being niced.
If it's MySQL, in general you have to look at your queries and at MySQL tuning to see why those queries are slamming the server.
Slamming your MySQL server process can show up as "the whole system being slow" if your primary view of the system is through MySQL.
You should also consider whether the command-line process is I/O intensive. That can be adjusted on some Linux distros using the ionice command, though its usage is not nearly as simple as the CPU nice command.
Basic usage:
ionice -c2 -n7 cmd
will run 'cmd' in the 'best effort' scheduling class at its lowest priority. See the man page for more usage details.
Using CPU cycles alone shouldn't take over the rest of the system. You can show this by doing:
while true; do :; done
This is an infinite loop (the ':' is a shell no-op, needed because the loop body can't be empty) and it will use as many CPU cycles as it can get (stop it with ^C). You can use top to verify that it is doing its job. I am quite sure this won't significantly affect the overall performance of your system to the point where MySQL dies.
However, if your PHP script is allocating a lot of memory, that certainly can make a difference. Linux has a tendency to start killing processes (the OOM killer) when the system runs out of memory.
I would narrow down the problem and be sure of the cause before looking for a solution.
You could mount your server's interesting directory/filesystem/whatever on another machine via NFS and run the script there (I know, this avoids the problem rather than solving it, and it's not really practical :| ).