I am preparing training images in Matlab. The problem is that the number of images is very large and the variables holding them are huge as well. Here is the specification of the work:
There are 10 .mat files, each containing on average 15000 images. Each file holds a 1x15000 cell array, and each file is on average 1.35 GB (900 kilobytes per image on average).
The average size of each image is 110x110 pixels, though each image has different dimensions. Each cell is stored as type single, with values between 0 and 1.
Loading all 10 .mat files at once is impossible because it makes Matlab freeze. My questions are:
Aren't the file sizes too big? 900 kilobytes on average for a small 110x110-pixel image is really too much, isn't it?
Is a cell array of single matrices the best practice for storing training images, or is there a more convenient alternative variable type?
Update: for comparison, this 110x110-pixel icon file is around 2 KB, versus 900 KB per image in Matlab!
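For scale, a rough sanity check (a minimal sketch; the file name trainingImages1.mat and the variable name images are placeholders for whatever your files actually contain):

% A 110x110 single image should take about 110*110*4 bytes ~ 48 KB in memory,
% so inspect what is actually stored before loading anything:
whos('-file', 'trainingImages1.mat')        % variable sizes without loading the file

% Load one file at a time instead of all ten at once
S = load('trainingImages1.mat');            % S.images assumed to be the 1x15000 cell
img = S.images{1};

% If 8-bit precision is acceptable for values in [0,1], uint8 is 4x smaller than single
imgU8 = im2uint8(img);                      % Image Processing Toolbox; scales [0,1] to [0,255]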
Related
I have an image.tiff with a resolution of 30 meters. I need to assign elevations to a large area using this image, but the problem is the size (12 million data points): when I run griddata on the data from image.tiff, it takes a long time (12 minutes or more).
Do you know a way to optimize this process, or a different function?
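One commonly suggested alternative to griddata (a sketch, assuming vectors x, y of coordinates and z of elevations extracted from the TIFF; those names and the grid bounds are placeholders) is scatteredInterpolant, which builds the triangulation once and can then be queried repeatedly:

F = scatteredInterpolant(x, y, z, 'linear', 'none');   % expensive step, done once
[Xq, Yq] = meshgrid(xmin:30:xmax, ymin:30:ymax);       % target grid at 30 m spacing
Zq = F(Xq, Yq);                                        % fast to evaluate, and F is reusable

If the elevations actually lie on a regular raster grid (as the pixels of a TIFF usually do), interp2 on the gridded data is typically faster still.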
I have 300+ TIFF images ranging in size from 600 MB to 4 GB; these are medical images converted from the MRXS file format.
I want to downsize the images and save copies at 512 x 445 pixels, as most of the files currently have five-figure dimensions (86K x 75K pixels). I need to downsize the files so that I can extract features from the images for a classification-related problem.
I am using an i5-3470 CPU, Windows 10 Pro machine, with 20 GB RAM and a 4TB external HDD which holds the files.
I've tried a couple of GUI and command-line applications such as XnConvert and Total Image Converter, but every GUI or CMD run causes the application to freeze.
Is downsizing such large files even feasible using my hardware, or must I try a different command/BAT approach?
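If Matlab is an option, one hedged sketch of a workflow that avoids holding the full-resolution image in RAM is to subsample while reading, using imread's 'PixelRegion' option for TIFF files (the file names and strides below are placeholders):

info = imfinfo('slide_001.tif');             % header only; gives full-resolution dimensions
rows = info(1).Height;                       % e.g. ~75000
cols = info(1).Width;                        % e.g. ~86000

rStep = floor(rows / 445);                   % read roughly every Nth row/column so only
cStep = floor(cols / 512);                   % ~512 x 445 pixels ever enter memory
small = imread('slide_001.tif', 'PixelRegion', {[1 rStep rows], [1 cStep cols]});

small = imresize(small, [445 512]);          % snap to the exact target size
imwrite(small, 'slide_001_small.tif');

Note that a strided read decimates pixels rather than averaging them, so the result is coarser than a true downscale of the full image.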
Is there a maximum limit to the amount of metadata that can be incorporated in an individual field of TIFF file metadata? I'd like to store a large text (up to a few MB) in the ImageDescription field.
There's no specific maximum limit for the ImageDescription field; however, there is a maximum size for the entire TIFF file, which is 4 GB. From the TIFF 6.0 spec:
The largest possible TIFF file is 2**32 bytes in length.
This is due to the offsets in the file structure, which are stored as unsigned 32-bit integers.
Thus, the theoretical maximum size is the one @cgohlke points out in the comments ("2^32 minus the offset"), but most likely you want to keep it smaller if you also intend to include pixel data...
Storing "a few MB" should not be a problem.
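As an illustration (a minimal sketch using Matlab's built-in Tiff class and a dummy image; the file name is a placeholder), a multi-megabyte string can be written into ImageDescription like this:

img  = uint8(255 * rand(256));               % dummy 8-bit grayscale image
desc = repmat('x', 1, 2e6);                  % ~2 MB of text for ImageDescription

t = Tiff('with_description.tif', 'w');
t.setTag('ImageLength', size(img, 1));
t.setTag('ImageWidth', size(img, 2));
t.setTag('Photometric', Tiff.Photometric.MinIsBlack);
t.setTag('BitsPerSample', 8);
t.setTag('SamplesPerPixel', 1);
t.setTag('PlanarConfiguration', Tiff.PlanarConfiguration.Chunky);
t.setTag('ImageDescription', desc);
t.write(img);
t.close();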
I have a problem with image reading. I want to know how big an image can be read and displayed in Matlab. Is it possible to display huge images like (12689, 4562, 7)? If not, how can I check whether such an image loaded correctly in Matlab?
Thanks a lot
There are two questions here:
Is it possible to load a large image from the disk to RAM?
Is it possible to show a large image?
The answer to the first question is that it depends on your amount of RAM and on your operating system. The answer to the second question is that Matlab (or any program) downscales the image before showing it, since the screen doesn't have that many pixels. So it depends on the internal algorithm and, again, on your amount of RAM.
The amount of RAM required for such an image, assuming 8 bits per pixel (uint8), would be:
12689*4562*7 / 1e6 = 405.2 MB
The maximum number of elements a single matrix can contain in your version of Matlab can be queried with:
[~, numEls] = computer;   % second output is the maximum number of elements allowed in one array
which is 2.147483647e+09 on my 32-bit R2010b. This is much more than 12689*4562*7, so in principle, if you have about 406 MB of unused RAM, you should be able to load the image in its entirety. Displaying it will involve some additional RAM (and probably take a long time), but should nevertheless be possible (aside from the fact that displaying an image with 7 colour layers is not very standard AFAIK).
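To address the "did it load correctly" part of the question (a small sketch; huge_image.tif is a placeholder, and a 7-layer image may need a format-specific reader), the dimensions reported by imfinfo can be compared against the array that imread actually returns:

info = imfinfo('huge_image.tif');            % header metadata, read without loading pixels
img  = imread('huge_image.tif');             % the actual pixel data

assert(size(img, 1) == info(1).Height && size(img, 2) == info(1).Width, ...
       'Loaded image dimensions do not match the file header');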
I have a binary image, and I need to compress it using run-length encoding (RLE). I used the regular RLE algorithm with a maximum run count of 16.
Instead of reducing the file size, it is increasing it. For example, in a 5x5 matrix, 10 of the values have a run count of one, which makes the file bigger.
How can I avoid this? Is there a better way, for example applying RLE only partially to the matrix?
If it's for your own use only, you can create a custom image file format, and in the header you can mark whether RLE is used or not, the range of X and Y coordinates, and possibly the bit planes for which it is used. But if you want to produce an image file that follows some defined image file format that uses RLE (.pcx comes to mind), you must follow that format's specification. If I remember correctly, .pcx has no option to disable RLE partially.
If you are not required to use RLE and you are only looking for an easy-to-implement compression method, then before applying any compression I suggest you first check how many bytes your 5x5 binary matrix file takes. If the file size is 25 bytes or more, then you are using at least one byte (8 bits) per element (or, alternatively, the file contains a lot of data that is not matrix content). If you don't need to store the size, a 5x5 binary matrix takes 25 bits, which is 4 bytes and 1 bit, so practically 5 bytes. I'm quite sure there's no compression method that is generally useful for files of 5 bytes. If you have matrices of different sizes, you can store the dimensions in, e.g., unsigned 16-bit integer fields (2 bytes each) for a maximum horizontal/vertical size of 65535, or unsigned 32-bit integer fields (4 bytes each) for a maximum of 4294967295.
For example, a 100x100 binary matrix takes 10000 bits, which is 1250 bytes. Add 2 x 2 = 4 bytes for two 16-bit size fields, or 2 x 4 = 8 bytes for two 32-bit size fields. After this, you can plan what the best compression method would be.
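To make the bookkeeping concrete, here is a minimal Matlab sketch of that layout (a hypothetical packed.bin file; this is just the header-plus-packed-bits idea described above, not an existing image format):

M = rand(100) > 0.5;                         % example 100x100 binary matrix

fid = fopen('packed.bin', 'w');
fwrite(fid, size(M), 'uint16');              % 2 x 2 bytes: number of rows, number of columns
bits = M(:);                                 % column-major bit stream
pad  = mod(-numel(bits), 8);                 % pad so the length is a whole number of bytes
bits = [bits; false(pad, 1)];
bytes = uint8(reshape(bits, 8, []).' * (2.^(7:-1:0)).');   % pack each group of 8 bits into a byte
fwrite(fid, bytes, 'uint8');                 % 100x100 -> 1250 bytes of payload
fclose(fid);

Reading it back is the reverse: read the two uint16 size fields, read ceil(rows*cols/8) bytes, unpack the bits, drop the padding, and reshape to the stored size.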