How big an image can be read and displayed in MATLAB?

I have a problem with image reading. I want to know how big an image can be read and displayed in MATLAB. Is it possible to display huge images like (12689, 4562, 7)? If not, how can I check whether the image loaded correctly in MATLAB?
Thanks a lot

There are two questions here:
Is it possible to load a large image from the disk to RAM?
Is it possible to show a large image?
The answer to the first question is that it depends on your amount of RAM and your operating system. The answer to the second question is that Matlab (or any program) downscales the image before showing it, since there aren't that many pixels on the screen. So it depends on the internal algorithm and, again, on your amount of RAM.

The amount of RAM required for such an image, assuming 8 bits per element (uint8), would be:
12689*4562*7 / 1e6 = 405.2 MB
The number of elements a single matrix can contain in your version of Matlab:
[~, numEls] = computer;
which is 2.147483647e+09 on my 32-bit R2010b. This is much more than 12689*4562*7, so in principle, if you have ~406 MB of unused RAM, you should be able to load the image in its entirety into RAM. Displaying said image will require some additional RAM (and probably take a long time), but should nevertheless be possible (aside from the fact that displaying an image with 7 colour layers is not very standard AFAIK).
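If you want a quick sanity check before (and after) loading, here is a minimal sketch, assuming a single-image TIFF named big_image.tif:

% Inspect the header first - no pixel data is read yet
info = imfinfo('big_image.tif');
fprintf('%d x %d pixels, %d bytes on disk\n', info.Height, info.Width, info.FileSize);
% Now load and verify that the array matches the header
img = imread('big_image.tif');
disp(size(img))   % should agree with info.Height and info.Width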

Related

MATLAB: Are there any problems with many (millions) small files compared to few (thousands) large files?

I'm working on real-time test software in MATLAB. On user input I want to extract the value of one (or a few neighbouring) pixels from 50-200 high-resolution images (~25 MB each).
My problem is that the total image set is too big (~2000 images) to store in RAM, so I need to read each of the 50-200 images from disk after each user input, which of course is way too slow!
So I was thinking about splitting the images into sub-images (~100x100 pixels) and saving these separately. This would make the image-read process quick enough.
Are there any problems I should be aware of with this approach? For instance, I've read about people having trouble copying many small files; will this affect me too, i.e. make the image reads slower?
rahnema1 is right - imread(...,'PixelRegion') will speed up the read operation. If that is not enough for you, even if your files are not fragmented, maybe it is time to think about some database?
Disk operations are always the bottleneck. First we switch to disk caches, then distributed storage, then RAID, and after some more time, we end up with in-memory databases. You should decide which access speed is acceptable.
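For illustration, a minimal sketch of the 'PixelRegion' option (the file name and coordinates are made up; the option is supported for TIFF and JPEG 2000 files):

% Decode only rows 500-509 and columns 800-809 instead of the full ~25 MB image
pix = imread('frame_0042.tif', 'PixelRegion', {[500 509], [800 809]});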

What is the maximum number of columns in the input data in MATLAB?

I must import a big data file in MATLAB, and its size is about 300 MB.
Now I want to know the maximum number of columns that I can import into MATLAB, so I can divide that file into some smaller files.
Please help me.
There is no "maximum" number of columns that you can create for a matrix. The limiting factors are your RAM (à la knedlsepp), the data type of the matrix (which is also important... a lot of people overlook this), your operating system, and also what version of MATLAB you're using - specifically whether it's 32 or 64 bit.
If you want a more definitive answer, there is a comprehensive chart on MATLAB Answers of what you can allocate given your OS version, MATLAB version and the data type of the matrix you want to create:
http://www.mathworks.com/matlabcentral/answers/91711-what-is-the-maximum-matrix-size-for-each-platform
Even though the above chart is for MATLAB R2007a, the sizes will most likely not have changed over the evolution of the software.
There are a few caveats with the above figure that you need to take into account:
The above table also takes your workspace size into account. As such, if you have other variables in memory and you are trying to allocate a matrix that reaches the limit seen in the chart, you will not be successful in its allocation.
The above table assumes that MATLAB has just been launched with no major processing carried out in a startup.m file.
The above table assumes that unlimited system memory is available, i.e. RAM plus any virtual memory or swap file.
The above table's actual limits will be less if there is insufficient system memory available, usually due to the swap file being too small.
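If the aim is simply to get a 300 MB text file into MATLAB without splitting it by hand, one alternative is to read it in row chunks. A minimal sketch, assuming a comma-separated file named bigdata.csv with three numeric columns:

% Read a large delimited file 10000 rows at a time
fid = fopen('bigdata.csv', 'r');
while ~feof(fid)
    C = textscan(fid, '%f%f%f', 10000, 'Delimiter', ',');
    chunk = [C{:}];   % up to 10000-by-3 double matrix
    % ... process or save the chunk here ...
end
fclose(fid);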

How to read a large image with tiff format in Matlab?

I am trying to read an image with a size of 1.970.654 k (about 2 GB) in MATLAB, and later on I would like to crop the image into tiles. When I use imread it gives me an error of:
'Out of memory. Type HELP MEMORY for your options.'
I would really appreciate it if somebody could kindly help me with it.
Do you have enough memory? If you only have 2 GB of memory I doubt there is even enough memory allocated to Matlab to do that.
Also, a workaround might just be to use another editor to split the image into separate sets of TIFFs and then perform your analysis on each set.
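Since the goal is to tile the image anyway, another option is to never load it whole: imread's 'PixelRegion' option can pull one tile at a time from a TIFF. A rough sketch (the file names and the 1024-pixel tile size are assumptions):

% Cut a huge single-image TIFF into 1024x1024 tiles, one tile in RAM at a time
info = imfinfo('huge.tif');
T = 1024;
for r = 1:T:info.Height
    for c = 1:T:info.Width
        rows = [r, min(r + T - 1, info.Height)];
        cols = [c, min(c + T - 1, info.Width)];
        tile = imread('huge.tif', 'PixelRegion', {rows, cols});
        imwrite(tile, sprintf('tile_r%05d_c%05d.tif', r, c));
    end
end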

Why do game developers put many images into one big image?

Over the years I've often asked myself why game developers place many small images into a big one. But not only game developers do that. I also remember the good old Winamp MP3 player had a user interface design file which was just one huge image containing lots of small ones.
I have also seen some big JavaScript GUI libraries like ext.js using this technique: there is a big image containing many small ones.
One thing I noticed is this: no matter how small my PNG image is, the Finder on the Mac always tells me it consumes at least 4 kB, which is a heck of a lot if you have just 10 pixels.
So is this done because storing 20 or more small images in a big one is much more memory efficient than having 20 separate files, each of them probably with its own header and metadata?
Is it because locating files on the file system is expensive and slow, and it is therefore much faster to locate only one big image and then split it up into smaller ones once it is loaded into memory?
Or is it laziness, because it is tedious to think of so many file names?
Is there a name for this technique? And how are those small images separated from the big one at runtime?
This is called spriting - and there are various reasons to do it in different situations.
For web development, it means that only one web request is required to fetch the image, which can be a lot more efficient than several separate requests. That's more efficient in terms of having less overhead due to the individual requests, and the final image file may well be smaller in total than it would have been otherwise.
The same sort of effect may be visible in other scenarios - for example, it may be more efficient to store and load a single large image file than multiple small ones, depending on the file system. That's entirely aside from any efficiencies gained in terms of the raw "total file size", and is due to the per file overhead (a directory entry, block size etc). It's a bit like the "per request" overhead in the web scenario, but due to slightly different factors.
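On the "how are they separated at runtime" part of the question: once the atlas is decoded, extracting a sprite is just a sub-rectangle copy. In MATLAB terms (the file name and coordinates below are invented for illustration), that is plain array indexing:

% Extract one 32x32 sprite from a decoded sprite sheet
atlas  = imread('spritesheet.png');    % assumed file name
x = 65; y = 1; w = 32; h = 32;         % position/size from a lookup table
sprite = atlas(y:y+h-1, x:x+w-1, :);   % sub-rectangle copy, no file I/O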
None of these answers are right. The reason we pack multiple images into one big "sprite sheet" or "texture atlas" is to avoid swapping textures during rendering.
OpenGL and DirectX take a performance hit when you draw from one image (texture) and then switch to another, so we pack multiple images into one big image and can then draw several (or hundreds) of images and never switch textures. It has nothing to do with the 4 kB file size (or hasn't in 15 years).
Also, up until very recently, texture dimensions had to be powers of 2 (64, 128, 256), and if your game had lots of odd-sized images, that meant a lot of wasted memory. Packing them into a single texture could save a lot of space.
The 4 kB usage is a side effect of how files are stored on disk. The smallest addressable unit of storage in a filesystem is a block, which is usually a fixed size of 512, 1024, 2048, etc. bytes. In your Mac's case, it's using 4 kB blocks. That means that even a 1-byte file will require at least 4 kB worth of physical space to store, as it's not possible for the file system to address any storage unit smaller than 4 kB.
The reasons for these "large" blocks vary, but the big one is that the more "granular" your addressing gets (the smaller the blocks), the more space you waste on indexes that list which blocks are assigned to which files. If you had 1-byte blocks, then for every byte of data you store in a file, you'd also need to store 1+ bytes worth of usage information in the file system's metadata, and you'd end up wasting at least HALF of your storage on nothing but indexes.
The converse is true - the bigger the blocks, the more space is wasted for every smaller-than-one-block file you store, so in the end it comes down to what tradeoff you're willing to live with.
The reasons are a bit different in different environments.
On the web the main reason is to reduce the number of requests to the web server. Each request creates overhead, most notably a separate round trip over the network.
When fetching from good ol' mechanical hard drives, good read performance requires contiguous data. If you save data in lots of files you get extra seek time for each file. There is also the block size to consider. Files are made of blocks, in your case 4 kB. When reading a file of one byte you need to read a whole block anyway. If you have many small images, you can stuff a whole bunch of them into a single disk block and get them all in the same time as if you had only one small image in the block.
Another reason from days of yore was palettes.
If you did one image you could theme it with one palette: Colour = 14 = light grey with a hint of green.
If you did lots of little images you had to make sure you used the same palette for every one while designing them, or you got all sorts of artifacts.
Given you had one palette, you could then manipulate it, so everything currently green could be made red by flipping one value in the palette instead of trawling through every image.
Lots of simple animations like fire, smoke and running water are still done with this method.
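To make the palette trick concrete, here is a small MATLAB sketch (the GIF file name is an assumption; MATLAB colormap rows are 1-based, so palette entry 14 in 0-based terms is row 15):

% Palette swap on an indexed image: recolour without touching the pixels
[X, map] = imread('sprites.gif');   % X holds palette indices, map the colours
map(15, :) = [1 0 0];               % flip one palette entry to pure red
imshow(X, map)                      % every pixel using that entry turns red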

Where do I find the memory requirements of a MATLAB function?

I have a 3D array of values (0 or 1) which is very large (approx. 2300x2300x11). I want to fit a surface to these values using, for example, interp3, but when I try, MATLAB runs out of memory. Thus, I've decided to reduce the size of my array enough for MATLAB to accommodate it in memory.
Now, the smaller I make the reduced array, the worse my results will be (the surface fitting is part of a measurement process with high precision requirements), so I want to reduce the array as little as possible.
Is there any way to determine beforehand how much memory a certain array size will demand and how much memory is available, and then use this information to shrink the array just enough to avoid out-of-memory exceptions, but no more?
I don't know the answer to this, but I wonder if you can have your cake and eat it, too.
If your data set is too big, why not do a piecewise fit? Do it in chunks rather than omitting data points.
Or be smarter about how you omit data points. You want them in areas of high curvature - where your data is changing fastest. Leave out points in areas far away from the action, where nothing interesting is happening. You might have to do a fit, look at the surface, add and remove more points and try again.
It might be an iterative process, but I'll bet you'll be able to get a nice fit with a little luck and effort.
You can look at the maximum array sizes that are supported on different platforms. In general, if you have a PxQxR 3D array of doubles, then the size of your array in bytes is P*Q*R*8. For your matrix, the size is ~444 MB. You can also try reducing it to single precision, using single(A). single uses 4 bytes per element, so you can reduce the size of your array by a factor of 2.
I haven't really poked into the inner workings of interp3, but the exact memory requirements will depend on the interpolation option you choose. So, you can first try to convert it to single and see if it works. If not, try with 80% (90%) of the number of rows and columns. This way you have a good chunk of the original array, but the memory requirement is only 64% (81%) of the original.
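A minimal sketch of both suggestions combined (the 80% keep-fraction is the one mentioned above; A is assumed to already be in the workspace):

% Convert to single and keep ~80% of rows and columns:
% memory drops to 0.8^2 * 0.5 = 32% of the original double array
f    = 0.8;
rows = round(linspace(1, size(A, 1), round(f * size(A, 1))));
cols = round(linspace(1, size(A, 2), round(f * size(A, 2))));
B    = single(A(rows, cols, :));
whos A B   % compare the two memory footprints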
If that doesn't help, duffymo's suggestion is what you should be looking into.