I use the DumpIt tool to dump RAM to a raw file for forensic work. The problem is that sometimes I only want to dump a specific process, such as cmd or notepad, rather than dumping every process currently running.
(Screenshot: the DumpIt program I used.)
Use Sysinternals ProcDump. It can take dumps on demand or on a condition (such as high CPU usage or an unhandled exception).
You can also take a dump from the process manager (but be careful to use the 32-bit version if your target process is 32-bit as well).
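For example, if I remember the switches correctly, something like the following (the process names and dump file names are just placeholders; -ma requests a full memory dump):

  procdump -ma notepad.exe notepad_full.dmp
  procdump -ma -c 80 -s 10 cmd.exe cmd_highcpu.dmp

The first writes a full dump of notepad immediately; the second waits until cmd sustains 80% CPU for 10 seconds before dumping.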
This is a common error message and there are many general answers that have not worked for me.
I think I have isolated this particular problem to the PostgreSQL data directory being symlinked to an external hard drive.
FATAL: could not create shared memory segment: No space left on device
DETAIL: Failed system call was shmget(key=5432001, size=56, 03600).
HINT: This error does *not* mean that you have run out of disk space. It occurs either if all available shared memory IDs have been taken, in which case you need to raise the SHMMNI parameter in your kernel, or because the system's overall limit for shared memory has been reached.
$ sysctl -a | grep sysv
kern.sysv.shmmax: 412316860416
kern.sysv.shmmin: 8
kern.sysv.shmmni: 64
kern.sysv.shmseg: 128
kern.sysv.shmall: 100663296
$ sudo cat /etc/sysctl.conf
kern.sysv.shmmax=412316860416
kern.sysv.shmmin=8
kern.sysv.shmmni=64
kern.sysv.shmseg=128
kern.sysv.shmall=100663296
PostgreSQL version 9.4.15. From my PostgreSQL config:
shared_buffers = 128MB
I don't know what other settings would be relevant.
Other environment details:
The external hard drive with the data directory is at only 50% capacity. My RAM usage when this happens is ~60% capacity.
I have not been able to determine an exact set of steps that reproduces the bug. I have an external hard drive with a PostgreSQL data directory and a local folder with another data directory. In my project, I'll symlink to one or the other depending on which copy of data I want to use. As far as I have noticed, the problem only appears when I've been working off the symlinked hard drive and when I unplug it without stopping the server and then plug it back in. But it doesn't happen every time when I perform those steps.
I don't expect anyone to be able to point to the specific problem given the above description.
But how can I get more useful information next time I'm in a bugged state? Are there any system commands that would help identify the exact problem?
...It occurs either if all available shared memory IDs have been taken, in which case you need to raise the SHMMNI parameter in your kernel, or because the system's overall limit for shared memory has been reached.
How can I check whether all available shared memory IDs have been taken, or whether the system's overall limit for shared memory has been reached, and what do I do with the answer?
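Presumably something along these lines would show it (assuming the stock SysV IPC tools are available; <shmid> is a placeholder):

  $ ipcs -m            # list every SysV shared memory segment and its owner
  $ ipcs -m | wc -l    # rough segment count, to compare against kern.sysv.shmmni (64 here)
  $ ipcrm -m <shmid>   # remove a segment once it is confirmed to be orphaned

But I'm not sure how to interpret the output, or whether removing segments by hand (my guess is that backends that never shut down cleanly after the drive was unplugged leave orphaned segments behind) is safe.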
How can I measure the RAM, CPU, and disk consumption of MongoDB while I'm running find queries, insert queries, update queries, bulk queries, etc.?
I thought about mongoperf, but it only shows me disk usage. It is great because it can create threads, choose an amount of GB, and read or write, but I also need to know how much RAM and CPU MongoDB takes.
Something like running htop, but for MongoDB.
You could use the ps(1) command (I guess you are on Linux).
Programmatically, you could (on Linux) use the /proc/ file system (which is used by ps, top, htop). For details, read proc(5).
To get the pid of your MongoDB process, you could use pidof(1) or pgrep(1). If the pid of the mongod server is 1234, you would be interested in /proc/1234/status.
Notice that (on Linux) a process does not directly consume RAM. The (mongod server) process has a virtual address space, and the kernel manages the RAM (and dispatches it amongst processes). You would be interested in the resident set size (and you can query it with ps or via /proc/).
The virtual address space of the process with pid 1234 can be queried via /proc/1234/status and /proc/1234/maps (see also pmap(1)).
If you are not familiar with /proc/, play with it first on the command line for your own shell, by running cat /proc/$$/status and cat /proc/$$/maps and exploring /proc/$$/.
On my machine, sudo cat /proc/$(pidof mongod)/status gives some interesting output.
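Concretely, something like this should give a quick htop-style snapshot for mongod (assuming you are on Linux and the process is named mongod, which is the default):

  $ ps -o pid,%cpu,%mem,rss,vsz -p $(pidof mongod)                 # CPU share, resident (RSS) and virtual (VSZ) sizes
  $ grep -E 'VmRSS|VmSize' /proc/$(pidof mongod)/status            # the same figures straight from /proc
  $ watch -n 1 'ps -o pid,%cpu,%mem,rss,vsz -p $(pidof mongod)'    # refresh every second while you run your queries

The RSS column is the closest thing to "RAM the server is actually using right now"; VSZ will typically be much larger because of memory-mapped files.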
I installed MongoDB on a Windows Server 2008 R2 machine where hotfix KB2731284 is not installed, and I cannot restart the server easily.
In the hotfix description, I got this message "You run an application that uses the FlushViewOfFile() function to clean up memory-mapped files from the paged memory pool." (see https://support.microsoft.com/en-us/kb/2731284)
My question is: when is the FlushViewOfFile() function called? My application just writes to a collection and reads data from it. Do I risk any incorrect behavior?
I think you can run MongoDB without applying the hotfix, but I would not recommend it. Over time you may run into problems. MongoDB includes some fixes to work around the issue.
A detailed description of the problem can be found here and here.
See also this.
On Windows, Memory Mapped File flushes are synchronous operations. When the OS Virtual Memory Manager is asked to flush a memory mapped file, it makes a synchronous write request to the file cache manager in the OS. This causes large I/O stalls on Windows systems with high Disk IO latency, while on Linux the same writes are asynchronous.
The problem becomes critical on high-latency disk drives like Azure persistent storage (10 ms). This behavior results in very long background flush times, capping disk IOPS at 100. On low-latency storage (local storage and AWS) the problem is not that visible.
On Windows 7 and Windows Server 2008 R2, applying the hotfix gives you better file allocation performance, which is relevant for MongoDB.
I am wondering what would be the best way to copy a file from src to dest in Scala, wrapped in an Akka Actor, and possibly using a RemoteActor across several machines.
I have a tremendous number of image files to copy from one directory to an NFS-mounted directory.
I haven't done much file handling in Java or Scala, but I know there is the NIO library and some others that have been worked on since Scala 2.7. I'm after whatever is safest and quickest.
I should probably give some idea of my infrastructure as well. The connection is 1000 Mb/s and runs through a Cisco 3560 from an Isilon node to a Windows 2003 Server. The Isilon node is the NFS mount and the Windows 2003 Server is a highly configured Samba (CIFS) mount.
You probably can't beat the underlying OS file copy speed, so if the files are large or you can batch them, you're probably best off writing a shell script with Scala and then calling it with bash or somesuch. Chances are that one thread can saturate the disk IO, so there really isn't anything fancy to do. If the images are large, you'll be waiting for the 50ish MB/s limit on your disk (or 10ish MB/s limit on your 100 Mbps ethernet); if they're small, you'll be waiting for the quite-some-dozens of ms overhead on file seeks and network ping times and so on.
That said, you can use Apache Commons IO, which has a file copy utility, and a previous question has a high-performance answer among the top rated entries. You can have one actor handle all the copying tasks, and that should be as fast as if you have a bunch of actors all trying to compete for the same limited IO bandwidth.
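If it helps, here is a minimal sketch of the single-copying-actor idea, assuming a reasonably modern setup (java.nio.file from Java 7+ and classic Akka untyped actors); the CopyFile message, the CopyActor name, and the paths are my own placeholders, not from any library:

  import java.nio.file.{Files, Paths, StandardCopyOption}
  import akka.actor.{Actor, ActorSystem, Props}

  // Message carrying one copy job.
  case class CopyFile(src: String, dest: String)

  // A single actor serialises all copies; as noted above, one thread is
  // usually enough to saturate the disk or the NFS link.
  class CopyActor extends Actor {
    def receive = {
      case CopyFile(src, dest) =>
        // Files.copy does the byte shuffling; REPLACE_EXISTING overwrites any stale copy.
        Files.copy(Paths.get(src), Paths.get(dest), StandardCopyOption.REPLACE_EXISTING)
    }
  }

  object CopyApp extends App {
    val system = ActorSystem("copy")
    val copier = system.actorOf(Props[CopyActor](), "copier")
    copier ! CopyFile("/local/images/img001.jpg", "/mnt/nfs/images/img001.jpg")
  }

Sending each file as its own message keeps the mailbox as the work queue, so moving to remote actors later only changes where the actor runs, not the copying code itself.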
I am hosting IIS-based web service applications on a Windows 2008 64-bit system running on a quad-core, 8 GB machine. I have run into a couple of instances where W3WP was using 7.6 GB of memory and nothing else on the system was responding, including RDP. Right-clicking the process in Task Manager and creating a dump froze the system and all its threads for a long time (close to 30 minutes). When the freeze-up occurred during off hours, we let the dump run for a while (close to an hour), but it still didn't complete. In the interest of getting the system back up, we had to kill IIS.
I tried other tools like procexp, DebugDiag, etc. to create a full memory dump, all with the same result.
So, what tool does the community use to grab dump files quickly? Or without freezing all the threads? I realize the latter might be a rhetorical question. But what are the options for generating such a large dump file without locking up the system for a long time?
IMO you shouldn't have to wait until the process memory grows to 8 GB. I am sure that at something like 3-4 GB you should already be able to detect the memory leak.
ProcDump has an option based on a memory threshold:
-m Memory commit threshold in MB at which to create a dump of the process.
I would use this option to dump the memory of the process.
An SSD would also help the dump get written faster.
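For example, using the -m option above (w3wp.exe, the 4096 MB threshold, and the dump file name are placeholders; -ma asks for a full dump):

  procdump -ma -m 4096 w3wp.exe w3wp_highmem.dmp

This waits until the process's committed memory crosses roughly 4 GB and then writes the dump, so you capture the leak well before the box reaches 7.6 GB and stops responding. If several w3wp processes are running, target the specific PID instead of the image name.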
WPA, a.k.a. xperf (http://msdn.microsoft.com/en-us/performance/cc825801.aspx), is a powerful tool for diagnosing applications. You will get the call stack of the culprit allocation. You don't have to collect a dump, it is non-invasive, and it does not add much load on production systems.
Complete step-by-step information is available here: http://msdn.microsoft.com/en-us/library/ff190906(v=VS.85).aspx
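For what it's worth, the capture I would try first looks roughly like this (the kernel flag and stack-walk names are from memory and should be checked against xperf -providers k on your machine before relying on them):

  xperf -on PROC_THREAD+LOADER+VIRT_ALLOC -stackwalk VirtualAlloc
  ...reproduce the memory growth in w3wp...
  xperf -d virtalloc.etl

The resulting .etl trace can then be opened in WPA to see which call stacks own the outstanding allocations, without ever writing a multi-gigabyte dump.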