How to use PowerShell to detect unallocated space on an MBR disk? - powershell

I am currently facing an issue with one of my MBR disks on a Windows server. The disk is supposed to have a maximum size of 2048GB, but I have noticed that there is additional space allocated to it from the storage end. After checking the disk, I found out that there is about 152GB of unallocated space on the disk.
If the allocated size were less than 2048GB, the unallocated space would be easy to detect using the diskpart command. In this case, however, the allocated size exceeds the MBR maximum, which makes the unallocated space difficult to detect.
I have tried to detect this issue using various methods, such as the diskpart command, but I was unable to find any information about this unallocated space. I also tried using the procmon tool while opening the diskmgmt snap-in, but the log was too large and I couldn't find any useful information in it.
I am now seeking help from the community with a PowerShell script or command that can detect this unallocated space on the MBR disk. I have been searching online but haven't found any solutions that work for me. If anyone has experience with this issue or knows of a PowerShell script that can detect unallocated space on an MBR disk, please share it with me. I would greatly appreciate any help or guidance on this matter.

I'm not sure how accurate this is, but you can try counting the total amount of allocated space and subtracting that number from the total underlying disk size to get the total unallocated space:
# Query the Win32_DiskDrive instance representing the physical disk
$disk = Get-CimInstance Win32_DiskDrive -Filter 'Index = 2'
# Find all associated partitions
$partitions = $disk | Get-CimAssociatedInstance -ResultClassName Win32_DiskPartition
# Calculate total allocation for configured partitions
$allocated = $partitions | Measure-Object -Property Size -Sum | Select-Object -ExpandProperty Sum
# Calculate the remaining void
$unallocated = $disk.Size - $allocated
Write-Host ("There is {0}GB of disk space unallocated" -f ($unallocated / 1GB))

Related

avg. disk write queue length performance counter for individual disk in powershell

I am trying to extract the disk-related perf counters. Below is the query that I am using now.
Get-Counter -Counter '\PhysicalDisk(*)\Avg. Disk Write Queue Length' | Select-Object -ExpandProperty CounterSamples
The above returns an expected result for each physical disk separately as shown below.
However, when we execute the same query in a cluster-based environment, the drive details cannot be obtained individually.
I need to extract the perf counter values for each disk individually in a clustered environment as well.
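Not a full answer for the clustered case, but a small sketch that pairs every instance name with its cooked value may at least show whether the clustered disks appear as separate PhysicalDisk instances on the node you query (the counter path assumes an English locale):
Get-Counter -Counter '\PhysicalDisk(*)\Avg. Disk Write Queue Length' |
    Select-Object -ExpandProperty CounterSamples |
    Select-Object InstanceName, CookedValue |
    Format-Table -AutoSize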

Powershell get overall CPU usage

I have a script that outputs my overall cpu usage. But if I compare this to the Task Manager, I get a different number. Is my script just wrong or is there a better way to do it?
$cpu = Get-WmiObject win32_processor
logwrite $cpu.LoadPercentage
Task Manager says 26% while the output file says 1%. My script says 0%, 1% or 2% most of the time.
The reason is that CPU usage fluctuates from moment to moment, and Task Manager reflects that: if you watch it, the reported usage changes constantly.
$cpu.LoadPercentage from your script gives you the CPU usage only at the instant your output file is created, hence the discrepancy. You should look for a more dynamic way of getting CPU usage, or sample it at intervals, as in the sketch below.
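For example, a minimal sketch that averages a few one-second samples of the _Total instance (the counter path assumes an English locale) should track Task Manager much more closely:
# Take five samples, one second apart, then average them
$samples = Get-Counter -Counter '\Processor(_Total)\% Processor Time' -SampleInterval 1 -MaxSamples 5
$avg = ($samples |
    Select-Object -ExpandProperty CounterSamples |
    Measure-Object -Property CookedValue -Average).Average
Write-Host ("{0:N0}%" -f $avg)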

Get total amount of memory on NUMA systems in Powershell

I'm trying to get the total amount of memory in a NUMA system.
My test system is a two socket physical server with 512GB of memory.
These are the methods I tried:
Win32_PhysicalMemory
(Get-WmiObject Win32_PhysicalMemory | Measure-Object -Property Capacity -Sum).Sum / 1GB
This returns 255.99 GB.
Cim_PhysicalMemory
$MaxRam = 0
$RamPool = Get-CimInstance -ClassName Cim_PhysicalMemory | ForEach-Object { $_.Capacity }
foreach ($RamBank in $RamPool) {
    $MaxRam += $RamBank / 1GB
}
This returns 255.99 GB.
As you can see, I always get 256 GB as the result. In the BIOS I see that each socket has 256 GB assigned to it.
How can I get the memory for all NUMA nodes?
A solution I thought of was to take the amount of memory found with the above methods and multiply it by the number of sockets.
Is this a correct assumption?
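Not an authoritative answer, but as a cross-check it may be worth comparing against what the OS itself reports, which should cover all NUMA nodes; if these also show 256 GB, the full 512 GB is not visible to Windows at all and multiplying by socket count would only mask that:
# Total physical memory as seen by Windows (bytes)
(Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB
# Memory visible to the OS (value is reported in KB, so /1MB yields GB)
(Get-CimInstance Win32_OperatingSystem).TotalVisibleMemorySize / 1MB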

How to keep 32 bit mongodb memory usage down on changing dataset

I'm using MongoDB on a 32 bit production system, which sucks but it's out of my control right now. The challenge is to keep the memory usage under ~2.5GB since going over this will cause 32 bit systems to crash.
According to the MongoDB team, the best way to track memory usage is with your operating system's process tracking tools (i.e. ps or htop on Unix systems, Process Explorer on Windows), watching the virtual memory size.
The DB mainly consists of one table which is continually cycling data, i.e. receiving data at regular intervals from sensors, and every day a cron job wipes all data from before the last 3 days. Over a period of time, the memory usage slowly increases. I took some notes over time using db.serverStats(), db.lectura.totalSize() and ps, shown in the chart below. Note that the size of the table in question has reduced in the last month but the memory usage increased nonetheless.
Now, there is some scope for adjustment in how many days of data I store. Today I deleted basically half of the data, and then restarted mongodb, and yet the mem virtual / mem mapped and most importantly memory usage according to ps have hardly changed! Why do these not reduce when I wipe data (and restart)? I read some other questions where people said that mongo isn't really using all the memory that it might appear to be using, and that you can't clear the cache or limit memory use. But then how can I ensure I stay under the 2.5GB limit?
Unless there is a way to stem this gradual increase in memory usage, irrespective of dataset size, it seems to me that the 32-bit version of Mongo is unusable. Note: I don't mind losing a bit of performance if it solves the problem.
To answer why the mapped and virtual memory usage does not decrease with the deletes: the mapped number is what you get when you mmap() the entire set of data files. This does not shrink when you delete records, because although the space is freed up inside the data files, the files themselves are not reduced in size - they are just emptier afterwards.
Virtual memory also includes journal files, connections, and other non-data memory usage, but the same principle applies there. This, and more, is described here:
http://www.mongodb.org/display/DOCS/Checking+Server+Memory+Usage
So the 2GB storage size limitation on 32-bit actually applies to the data files whether or not there is data in them. To reclaim deleted space, you will have to run a repair. This is a blocking operation and requires the database to be offline/unavailable while it runs. It will also need up to 2x the original size in free disk space, since it essentially rewrites the files from scratch.
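For reference, a hedged sketch of what that repair might look like from PowerShell on Windows (the service name and dbpath below are placeholders; make sure roughly 2x the data size is free on disk first):
# Stop the running instance before repairing (service name is an assumption; adjust to your install)
Stop-Service MongoDB
# Run the blocking repair against the data directory (placeholder path)
& mongod --dbpath 'D:\data\db' --repair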
This file-size limitation, and the problems it causes, is why the 32-bit version should not be run in production; it is simply not suitable. I would recommend getting onto a 64-bit version as soon as possible.
By the way, neither of these figures (mapped or virtual) actually represents your resident memory usage, which is what you really want to look at. The best way to do this over time is via MMS, which is the free monitoring service provided by 10gen - it will graph virtual, mapped and resident memory for you over time as well as plenty of other stats.
If you want an immediate view, run mongostat and check out the corresponding memory columns (res, mapped, virtual).
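On Windows, a quick point-in-time view of the same numbers from PowerShell might look like this (it assumes the process is named mongod):
Get-Process mongod |
    Select-Object Name,
        @{ n = 'ResidentGB'; e = { $_.WorkingSet64 / 1GB } },
        @{ n = 'VirtualGB';  e = { $_.VirtualMemorySize64 / 1GB } }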
In general, when using 64-bit builds with essentially unlimited storage, the data will usually greatly exceed the available memory. Therefore mongod will use all of the resident memory it can get (which is why you should always have swap configured, so the OOM killer does not come into play).
Once that is used, the OS does not stop allocating memory, it will just have the oldest items paged out to make room for the new data (LRU). In other words, the recycling of memory will be done for you, and the resident memory level will remain fairly constant.
Your options for stretching 32-bit are limited, but you can try some things. What you run out of is address space, and because each additional database file is larger than the last, you want to avoid crossing the boundary from "n" files to "n+1". It may be worth structuring your data into more, smaller databases so that you can fit the maximum amount of actual data into memory with as little "dead space" as possible.
For example, if your database named "mydatabase" consists of the files mydatabase.ns (the namespace file) at 16 MB, mydatabase.0 at 64 MB, mydatabase.1 at 128 MB and mydatabase.2 at 256 MB, then the next file created for this database will be mydatabase.3 at 512 MB. If instead of adding to mydatabase you instead created an additional database "mynewdatabase" it would start life with mynewdatabase.ns at 16 MB and mynewdatabase.0 at 64 MB ... quite a bit smaller than the 512 MB that adding to the original database would be. In fact, you could create 4 new databases for less space than would be consumed by adding a new file to the original database, and because the files are smaller they would be easier to fit into contiguous blocks of memory.
It is well known that 32-bit builds should not be used in production.
Use 64-bit systems.
Period.

what is the suggested number of bytes each time for files too large to be memory mapped at one time?

I am opening files using a memory map. The files are apparently too big (6GB on a 32-bit PC) to be mapped in one go, so I am thinking of mapping part of the file each time and adjusting the offset on the next mapping.
Is there an optimal number of bytes for each mapping or is there a way to determine such a figure?
Thanks.
There is no optimal size. With a 32-bit process, there is only 4 GB of address space total, and usually only 2 GB is available to user mode. That 2 GB is then fragmented by code and data from the EXE and DLLs, heap allocations, thread stacks, and so on. Given this, you will probably not find more than 1 GB of contiguous space to map a file into.
The optimal number depends on your app, but I would be wary of mapping more than 512 MB into a 32-bit process, and even at 512 MB you might run into issues depending on your application. Alternatively, if you can go 64-bit, the address space is so large that mapping multiple gigabytes of a file should cause no problems.
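As a rough illustration in PowerShell (via .NET's MemoryMappedFile, available on .NET 4 / PowerShell 3 and later), mapping a large file one window at a time might look like this sketch; the path and the 256 MB window size are placeholders:
# Open the existing file as a memory-mapped file
$path = 'C:\temp\bigfile.bin'
$window = 256MB
$mmf = [System.IO.MemoryMappedFiles.MemoryMappedFile]::CreateFromFile($path, [System.IO.FileMode]::Open)
$length = (Get-Item $path).Length
for ($offset = 0; $offset -lt $length; $offset += $window) {
    # Map only the current window, never the whole file
    $size = [Math]::Min($window, $length - $offset)
    $view = $mmf.CreateViewAccessor($offset, $size)
    # ... read from $view here, e.g. $view.ReadByte(0) ...
    # Release the window before mapping the next one
    $view.Dispose()
}
$mmf.Dispose()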
You could use an API like VirtualQuery to find the largest contiguous space - but then you're actually forcing out-of-memory errors to occur, as you are removing large amounts of address space.
EDIT: I just realized my answer is Windows specific, but you didn't mention which platform you are discussing. I presume other platforms have similar limiting factors for memory-mapped files.
Does the file need to be memory mapped?
I've edited 8gb video files on a 733Mhz PIII (not pleasant, but doable).