PowerShell RAM usage for exact process - powershell

Good day,
There are a lot of questions on the same theme, but I wonder how I can get RAM usage for one exact process, for instance notepad++ (I need to search by name). The output shouldn't be in table view, just MB and %.
Thank you for your help
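For what it's worth, here is a minimal sketch of one way to do this with Get-Process and CIM. It assumes "RAM usage" means the working set and that the process name is notepad++; adjust the property and name for your case.
# Minimal sketch: working set of a process as MB and as % of total physical RAM.
# Assumes "RAM usage" means the working set; use a different property if you need private bytes.
$name = 'notepad++'
# Sum the working sets of all processes with that name (there may be several).
$bytes = (Get-Process -Name $name -ErrorAction Stop | Measure-Object WorkingSet64 -Sum).Sum
# Total physical memory, queried via CIM.
$totalBytes = (Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory
$mb  = [math]::Round($bytes / 1MB, 2)
$pct = [math]::Round(($bytes / $totalBytes) * 100, 2)
"$mb MB"
"$pct %"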

Related

NetLogo BehaviorSpace memory size constraint

In my model I'm using BehaviorSpace to carry out a number of runs, with variables changing for each run and the output being stored in a *.csv for later analysis. The model runs fine for the first few iterations, but quickly slows as the data grows. My question is: will file-flush, when used in BehaviorSpace, help with this? Or is there a way around it?
Cheers
Simon
Make sure you are using table format output and that spreadsheet format is disabled. At http://ccl.northwestern.edu/netlogo/docs/behaviorspace.html we read:
Note however that spreadsheet data is not written to the results file until the experiment finishes. Since spreadsheet data is stored in memory until the experiment is done, very large experiments could run out of memory. So you should disable spreadsheet output unless you really want it.
Note also:
doing runs in parallel will multiply the experiment's memory requirements accordingly. You may need to increase NetLogo's memory ceiling (see this FAQ entry).
where the linked FAQ entry is http://ccl.northwestern.edu/netlogo/docs/faq.html#howbig
Using file-flush will not help. It flushes any buffered data to disk, but only for a file you opened yourself with file-open, and anyway, the buffer associated with a file is fixed-size, not something that grows over time. file-flush is really only useful if you're reading from the same file from another process during a run.

Which is more efficient, storing output in a variable or writing output to a file?

I am using Robocopy in PowerShell to sort through and output millions of filenames older than a user-specified age. My question is this: Is it better to make use of Robocopy's logging feature, then import the log via Get-Content -ReadCount, or would it be better to store Robocopy's output in a variable so that the script doesn't have to write to disk?
I would have to regex either way to get the actual file names. I'm using Robocopy because many of the files have paths longer than 248 chars.
Is one way preferred over the other? I don't want to miss something that should be considered obvious.
You can skip all the theory and speculation about the multiple factors in play by measuring how long each method takes using Measure-Command, for example:
Measure-Command {$rc_output = robocopy <arguments>}
Measure-Command {robocopy <arguments> /log:rc.log; Get-Content rc.log [...]}
You'll get output telling you exactly how long each version took, down to the millisecond. Try it out on a small amount of sample data, see which one is quicker, then apply it to your millions of files.
I will add to #mjolinor's comment, and the other comments. To answer the question directly:
Saving information to a variable (and therefore to RAM) is faster than writing directly to disk, but only within limits:
Variables are designed to store small (<10 MB) amounts of data. They are not designed to hold things like entire databases. If the data is large (i.e. millions of rows, tens of megabytes or more), then disk is always better. The problem is that if you shove a ton of information into a variable, you will fill up your RAM, and once your RAM is full things slow down, paging memory to disk starts happening, and basically everything stops working, including any commands you are currently running (i.e. Robocopy).
Overall, because you are dealing with millions of rows, my recommendation is to write it to disk, because your results are likely to take up quite a bit of space, much more than a variable "should" hold.
Now, after saying all that and delving into the details of how programs manipulate bits in memory, none of it really matters, because the time spent writing things to disk is very small compared to the time it takes to process all the files.
If you are processing 1,000,000 files at a good speed, say 1,000 files a second, it will take 1,000 seconds, which is over 16 minutes to run through all the files.
If, let's say, writing to disk is bad and slows you down by 5 files per second, so 995 files a second instead, the run only takes about 5 seconds longer. 5 seconds is an impact of about 0.5%, which is nothing compared to the time it takes to run the whole process.
It is much more likely that writing to a variable will cause far more trouble than writing to disk.
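To make the disk-based approach concrete, here is a rough sketch of the pattern described in the question: log to a file, then stream it back in with Get-Content -ReadCount. The robocopy arguments, the log path, and the regex are placeholders, not a tested recipe.
# Sketch of the log-to-disk pattern; <source>, <destination>, <arguments> and the regex are placeholders.
# /log: sends robocopy's output to a file instead of the pipeline.
robocopy <source> <destination> <arguments> /log:rc.log
# Read the log back in batches of 1000 lines so the whole file never sits in memory at once,
# keeping only lines that contain a UNC-style path (placeholder regex - adjust to your output).
Get-Content rc.log -ReadCount 1000 |
    ForEach-Object { $_ -match '\\\\' } |
    Out-File filenames.txt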
It depends on how much output you're talking about and what your available system resources are. It will be faster to write the results out to a file and then read them back in if the disk I/O time is less than the additional memory-management overhead required to get them into memory. You can try it both ways and time it, but I'd try reading it into memory first while monitoring with Task Manager. If it starts throwing lots of page faults, that's a clue that you may be better off using the disk as intermediate storage.
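If you'd rather watch for paging from the script itself instead of Task Manager, a rough sketch using the built-in performance counters (the counter paths assume an English-language Windows install):
# Rough sketch: sample memory-pressure indicators once a second for 10 seconds.
Get-Counter -Counter '\Memory\Page Faults/sec', '\Memory\Available MBytes' `
            -SampleInterval 1 -MaxSamples 10 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }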

system hardware related

Hi everybody,
First of all, thanks for your support and help. Now I want to know how to calculate the load on a computer system. I have heard about load sensors and other utilities, like top or System Monitor, which mostly offer things such as the temperature of the hard disk or the system, but I don't want to use them.
All I want to know is simply the load on the CPU (e.g. the CPU is 40% free), the load on memory (meaning total memory usage or the amount of free memory), free disk space, etc.
I don't need a software tool to find these things.
All I want to know is: is there any program in C (or another language or script) which can find these things, one by one or simultaneously?
Are there any commands to find this?
Or can anybody explain how a system monitor works?
Waiting for your help
Depends on your operating system. On Unix/Linux, there is the directory /proc, which contains a lot of files/directories with system information, such as /proc/loadavg or /proc/cpuinfo.
These aren't real files on your disk, but virtual files containing system information. But you can just open them and read them with standard file functions, so it works with any programming language, from C to Python.
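For example, a minimal sketch of reading a few of these entries; it happens to use PowerShell (pwsh on Linux) to stay in the same language as the rest of this page, but any language's ordinary file I/O does the same thing.
# Sketch: /proc entries are read like ordinary text files (assumes pwsh on Linux).
Get-Content /proc/loadavg                                      # 1-, 5- and 15-minute load averages
Get-Content /proc/meminfo | Select-String 'MemTotal|MemFree'   # total and free memory
Get-Content /proc/uptime                                       # uptime and idle time in seconds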

How to quickly get directory (and contents) size in cygwin perl

I have a perl script which monitors several windows network share drive usages. It currently monitors the free space of several network drives using the cygwin df command. I'd like to add the individual drive usages as well. When I use the
du -s | grep total
command, it takes forever. I need to look at the shared drive usages because there are several network drives that are shared from a single drive on the server. Thus, filling one network drive fills them all (yes, I know, not the best solution, not my choice).
So I'd like to know if there is a quicker way to get the folder usage that doesn't take forever.
du -s works by recursively querying the size of every directory and file. If your filesystem implementation doesn't store this total value somewhere, this is the only way to determine disk usage. Therefore, you should investigate which filesystem and drivers you are using, and see if there is a way to directly query for this data. Otherwise, you're probably SOL and will have to suck up the time it takes to run du.
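As an illustration of what du -s is doing (and a possible server-side alternative if you can run something locally on the Windows box hosting the share), a rough PowerShell sketch that recursively sums file sizes; the path is a placeholder.
# Rough sketch: recursively sum file sizes, which is essentially what du -s does.
# 'D:\Shares\Projects' is a placeholder path; run this on the server that hosts the share.
$sum = (Get-ChildItem 'D:\Shares\Projects' -Recurse -File -ErrorAction SilentlyContinue |
        Measure-Object -Property Length -Sum).Sum
'{0:N1} GB' -f ($sum / 1GB)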
1) The problem possibly lies in the fact that they are network drives - local du is acceptably fast in most cases. Are you doing du on the exact server where the disk is housed? If not, try to approach the problem from a different angle - run an agent on every server hosting the drives which calculates the local du summaries and then report the totals to a central process (either IPC or heck, by writing a report into a file on that same share filesystem).
2) If one of the drives takes a significantly larger share of space (on average) than the rest, you can optimize by doing du on all but the "biggest" one and then calculating the biggest one by subtracting the sum of the others from the df results.
3) Also, to be perfectly honest, it sounds like a suboptimal solution from a design standpoint - while you indicated that it's not your choice, I'd strongly recommend that you post a question on how you can improve the design within the parameters you were given (to the ServerFault website, not SO).

tfs database size - version control

I have TFS installed on a single server and am running out of space on the disk. (We've been using the instance for about 2 years now.)
Looking at the tables in SQL Server what seems to be culprit is the tbl_content table, it is at 70 GB. If I do a get on the entire source tree for all projects it is only about 8 GB of data.
Is this just all the histories of the files? It seems like a 10:1 ratio for just the histories... since I would think the deltas would be very small.
Does anyone know if that is a reasonable size given 8 GB of source (and 2 yrs of activity)? And if not what to look at to 'fix' this?
Thanks
I can't help with the ratio question at the moment, sorry. For a short-term fix you might check to see if there is any space within the DB files that can be freed up. You may have already, but if not:
SELECT name,
       size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;
If the statement above returns some space you want to recover, you can look into a one-time DBCC SHRINKDATABASE or DBCC SHRINKFILE, along with scheduling a routine SQL maintenance plan that may include defragmenting the database.
DBCC SHRINKDATABASE and DBCC SHRINKFILE aren't things you should do on a regular basis, because SQL Server needs some "swap" space to move things around for optimal performance. So neither should be relied upon as your long term fix, and both could cause some noticeable performance degradation of TFS response times.
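If you want to check this from PowerShell rather than SSMS, a rough sketch using Invoke-Sqlcmd to run the free-space query above; it assumes the SqlServer module is installed, and the instance and database names are placeholders.
# Rough sketch: run the free-space query against a TFS database.
# Requires the SqlServer (or older SQLPS) module; instance/database names are placeholders.
$query = @"
SELECT name,
       size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;
"@
Invoke-Sqlcmd -ServerInstance 'TFSSQL01' -Database 'Tfs_DefaultCollection' -Query $query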
JB
Are you seeing data growth every day, even when no activity occurs on the system? If the answer is yes, are you storing any binaries outside of the 8GB of source somewhere?
The reason that I ask is that if TFS is unable to calculate a delta, or if the file exceeds the size limit for delta generation, TFS will duplicate the entire binary file. I don't have the link with me (it's on my work machine), but it describes this scenario and how to fix it, in the event that this is the cause of your problems.