How to reduce size and size on disk? - katalon-studio

We have created automation projects using Katalon Studio.
Currently the project folder properties show:
Size: 1.61 MB
Size on Disk: 4.45 MB
Contains: 1033 Files, 444 Folders
How can we reduce the difference between Size and Size on Disk? Does this need to be sorted out as the project grows?

This is probably related to your disk cluster size. Files can be no smaller than the cluster size, which is usually somewhere in the range of a few KB. For example, if your cluster size is 4KB then a 1 byte file will still take up 4KB on the disk. Generally this is more noticeable when you have many small files. If you want to change this you will need to reformat your filesystem and choose a smaller cluster size.
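If you want to confirm that cluster rounding explains the gap, a rough estimate is easy to script. This is only a sketch: the 4 KB cluster size is an assumption (on Windows you can check the real value with fsutil fsinfo ntfsinfo C:), and the project path is hypothetical.

    import os

    CLUSTER_SIZE = 4 * 1024  # assumed cluster size; check with fsutil fsinfo ntfsinfo C:

    def folder_sizes(root):
        """Return (logical size, estimated size on disk) for a folder tree."""
        logical = allocated = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                size = os.path.getsize(os.path.join(dirpath, name))
                logical += size
                # each file occupies a whole number of clusters
                clusters = -(-size // CLUSTER_SIZE)  # ceiling division
                allocated += clusters * CLUSTER_SIZE
        return logical, allocated

    if __name__ == "__main__":
        logical, allocated = folder_sizes("MyKatalonProject")  # hypothetical project path
        print(f"Size: {logical / 2**20:.2f} MB, "
              f"estimated size on disk: {allocated / 2**20:.2f} MB")

If the estimate with a 4 KB cluster lands near the reported "Size on Disk", the difference is just allocation granularity and nothing in the project needs to change.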

Related

Batched Downsizing of GB Sized TIF Images

I have 300+ TIFF images ranging in size from 600MB to 4GB; these are medical images converted from MRXS file format.
I want to downsize and make copies of the images at 512 x 445-pixel dimensions, as most of the files currently have five-figure dimensions (86K x 75K pixels). I need to downsize the files so that I can extract features from the images for a classification-related problem.
I am using an i5-3470 CPU, Windows 10 Pro machine, with 20 GB RAM and a 4TB external HDD which holds the files.
I've tried a couple of GUI and command-line applications such as XnConvert and Total Image Converter, but each GUI or CMD operation causes the application to freeze.
Is downsizing such large files even feasible using my hardware, or must I try a different command/BAT approach?
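This isn't from the thread, but one possible sketch of a streaming approach: libvips (via the pyvips Python binding) processes images in small chunks rather than loading the whole raster, which tends to avoid exhausting RAM on files this large. The folder paths below are hypothetical, and forcing the output to exactly 512 x 445 ignores the aspect ratio.

    import glob
    import os
    import pyvips

    SRC = r"E:\mrxs_tiffs"   # hypothetical folder on the external HDD
    DST = r"E:\downsized"    # hypothetical output folder
    os.makedirs(DST, exist_ok=True)

    for path in glob.glob(os.path.join(SRC, "*.tif")):
        # thumbnail() decodes the image on demand, so the full 86K x 75K
        # raster never has to fit in memory at once
        img = pyvips.Image.thumbnail(path, 512, height=445, size="force")
        out = os.path.join(DST, os.path.basename(path))
        img.write_to_file(out)
        print("wrote", out)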

How large does the FAT structure need to be, and how large is the file?

Consider the following parameters of a FAT-based filesystem:
Blocks are 8KB (2^13 bytes) large
FAT entries are 32 bits wide, of which 24 bits are used to store a block address
A. How large does the FAT structure need to be to accommodate a 1GB (2^30 bytes) disk?
B. What is the largest theoretical file size supported by the FAT structure from part (A)?
A. How large does the FAT structure need to be to accommodate a 1GB (2^30 bytes) disk?
The FAT file system splits the space into clusters, then keeps a table (the "cluster allocation table", or FAT) with an entry for each cluster (saying whether it is free, faulty, or which cluster is the next one in a chain of clusters). To work out the size of the "cluster allocation table", divide the total size of the volume by the size of a cluster (to determine how many clusters, and therefore how many entries, there are), then multiply by the size of one entry, and then optionally round up to a multiple of the cluster size (depending on which answer you want: actual size or space consumed).
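As a concrete check with the parameters from the question (8 KiB blocks, 4-byte entries, a 1 GiB disk, and assuming the whole disk is the volume):

    BLOCK = 2**13          # 8 KiB blocks
    ENTRY = 4              # 32-bit FAT entries = 4 bytes each
    DISK  = 2**30          # 1 GiB disk

    blocks  = DISK // BLOCK                       # 2**17 = 131072 clusters
    fat_raw = blocks * ENTRY                      # 524288 bytes = 512 KiB
    fat_on_disk = -(-fat_raw // BLOCK) * BLOCK    # rounded up to whole blocks: still 512 KiB

    print(blocks, fat_raw, fat_on_disk)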
B. What is the largest theoretical file size supported by the FAT structure from part (A)?
The largest file size supported is determined by either (whichever is smaller):
the size of "file size" field in the file's directory entry (which is 32-bit for FAT32 and would therefore be 4 GiB); or
the total size of the space minus the space consumed by the hidden/reserved/system area, cluster allocation table, directories and faulty clusters.
For a 1 GiB volume formatted with FAT32, the max. size of a file would be determined by the latter ("total space - sum of areas not usable by the file").
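A rough figure for part (B), assuming the whole 1 GiB is the FAT volume and ignoring the reserved area, directories and bad clusters (all of which would only shrink the number further):

    BLOCK = 2**13
    DISK  = 2**30
    FAT   = (DISK // BLOCK) * 4      # 512 KiB, from part (A)

    field_limit = 2**32 - 1          # 32-bit file-size field, just under 4 GiB
    space_limit = DISK - FAT         # 1 GiB minus the FAT itself

    print(min(field_limit, space_limit))   # the space limit wins here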
Note that if you have a 1 GiB disk, it might (e.g.) be split into 4 partitions, and a FAT file system might be given a partition with only a fraction of that 1 GiB of space. Even if there is only one partition for the "whole" disk, typically (assuming "MBR partitions" and not the newer "GPT partitions", which take more space for partition tables, etc.) the partition begins on the second track (the first track is "reserved" for the MBR, partition table and maybe a "boot manager") or a later track (e.g. to align the start of the partition to a "4 KiB physical sector size" and avoid performance problems caused by a "512-byte logical sector size").
In other words, the size of the disk has very little to do with the size of the volume used for FAT; and when questions only tell you the size of the disk and don't tell you the size of the partition/volume you can't provide accurate answers.
What you could do is state your assumptions clearly in your answer, for example:
"I assume that a "1 GB" disk is 1000000 KiB (1024000000 bytes, and not 1 GiB or 1073741824 bytes, and not 1 GB or 1000000000 bytes); and I assume that 1 MiB (1024 KiB) of disk space is consumed by the partition table and MBR and all remaining space is used for a single FAT partition; and therefore the FAT volume itself is 998976 KiB."

What's the relationship between file system block size and disk space wasted per file

What is the relationship between file system block size and disk space wasted per file?
How can reducing the file system block size reduce the available/free disk space?
It's the CLUSTER SIZE that results in "wasted space." On disk-based file systems, space is allocated in clusters. Clusters are multiples of blocks, and the block size is determined by the hardware.
The smaller the cluster size, the more clusters there are on the disk, and the more overhead is required to manage those clusters. Usually this is one or more bitmaps with a bit per cluster.
Larger cluster size = lower overhead.
The tradeoff is that, if you need just one additional byte of storage, you have to allocate an entire cluster for it. The amount of "wasted" space grows with the size of the cluster.
Larger cluster sizes tend to be more efficient with larger files.
Smaller cluster sizes tend to be more efficient with smaller files.
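A tiny sketch makes the tradeoff concrete; the 10,000-byte file size here is just an arbitrary example:

    def wasted(file_size, cluster_size):
        """Slack space left in the last, partially used cluster of a file."""
        remainder = file_size % cluster_size
        return 0 if remainder == 0 else cluster_size - remainder

    file_size = 10_000  # bytes
    for cluster in (512, 4096, 65536):
        print(f"cluster {cluster:>6} B -> wasted {wasted(file_size, cluster):>6} B")

The same file wastes 240 bytes with 512-byte clusters but over 55 KB with 64 KB clusters, which is why small clusters suit many small files and large clusters suit a few large ones.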

What is the maximum size of single file in iPhone?

I have searched some posts and cannot find what the maximum file size is on the iPhone.
max size of an iOS application
maximum size of sqlite or database on iOS
As the above posts say, the maximum file size depends on the free disk space. So, can I store everything in an SQLite file, and can its file size exceed 4GB or 10GB?
According to the following links I found,
Mac OS, HFS File System volume and file limits
iOS filesystem, HFSX
HFS, Wiki
As the first link says, "The theoretical maximum file size for a Mac OS Extended file system is millions of terabytes. In practice, the maximum file size is equivalent to the maximum volume size, except for a small amount of disk space reserved for file system information."
The maximum file size is therefore equal to the maximum volume size, and the free disk space also has to be taken into account.
So, in conclusion, the maximum size of a single file depends on the free disk space.

How big can a memory-mapped file be?

What limits the size of a memory-mapped file? I know it can't be bigger than the largest contiguous chunk of unallocated address space, and that there should be enough free disk space. But are there other limits?
You're being too conservative: a memory-mapped file can be larger than the address space. The view of the memory-mapped file is limited by OS memory constraints, but that's only the part of the file you're looking at at one time. (And technically you could map multiple views of discontiguous parts of the file at once, so aside from overhead and page-length constraints, it's only the total number of bytes you're looking at that poses a limit. You could look at bytes [0 to 1024] and bytes [2^40 to 2^40 + 1024] with two separate views.)
In MS Windows, look at the MapViewOfFile function. It effectively takes a 64-bit file offset and a 32-bit length.
This has been my experience when using memory-mapped files under Win32:
If you map the entire file into one segment, it normally taps out at around 750 MB, because it can't find a bigger contiguous block of memory. If you split it up into smaller segments, say 100 MB each, you can get around 1500-1800 MB depending on what else is running.
If you use the /3GB switch you can get more than 2GB, up to about 2700MB, but OS performance is penalized.
I'm not sure about 64-bit, I've never tried it but I presume the max file size is then limited only by the amount of physical memory you have.
Under Windows: "The size of a file view is limited to the largest available contiguous block of unreserved virtual memory. This is at most 2 GB minus the virtual memory already reserved by the process. "
From MSDN.
I'm not sure about LINUX/OSX/Whatever Else, but it's probably also related to address space.
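As a rough illustration of the "map a window, not the whole file" idea, here is a sketch using Python's standard mmap module. The file name and window size are made up, and the offset passed to mmap must be a multiple of mmap.ALLOCATIONGRANULARITY:

    import mmap

    VIEW = 100 * 1024 * 1024            # map a 100 MB window at a time
    with open("huge.bin", "rb") as f:   # hypothetical multi-GB file
        offset = 0                      # must be a multiple of mmap.ALLOCATIONGRANULARITY
        view = mmap.mmap(f.fileno(), VIEW, access=mmap.ACCESS_READ, offset=offset)
        first_kb = view[:1024]          # only this window consumes address space
        view.close()

Advancing offset in ALLOCATIONGRANULARITY-aligned steps lets you walk a file far larger than the address space, one view at a time.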
Yes, there are limits to memory-mapped files. Most shocking of all:
Memory-mapped files cannot be larger than 2GB on 32-bit systems.
When a memmap causes a file to be created or extended beyond its current size in the filesystem, the contents of the new part are unspecified. On systems with POSIX filesystem semantics, the extended part will be filled with zero bytes.
Even on my 64-bit, 32GB RAM system, I get the following error if I try to read in one big numpy memory-mapped file instead of taking portions of it using byte-offsets:
OverflowError: memory mapped size must be positive
Big datasets are really a pain to work with.
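The "portions of it using byte-offsets" workaround mentioned above can look like this with numpy.memmap; the file name, dtype and sizes here are made up:

    import numpy as np

    CHUNK = 50_000_000                 # elements per window
    ITEM  = np.dtype("float32").itemsize

    # map only a 50M-element window of the file, starting at a byte offset,
    # instead of mapping the whole array at once
    window = np.memmap("big_array.dat", dtype="float32", mode="r",
                       offset=10 * CHUNK * ITEM, shape=(CHUNK,))
    print(window[:5])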
The limit of virtual address space is more than 16 terabytes on 64-bit Windows systems. The issue discussed here is most probably related to mixing DWORD with SIZE_T.
There should be no other limits. Aren't those enough? ;-)