I find that importing large multi-page TIFF files through the imread function is too slow for what I need to do. Some searching turned up a tutorial explaining a faster way to load TIFF files. I tried to use the code (below), but it keeps crashing halfway through. As the tutorial stated, I might have to change the code based on what I'm trying to load, and this is where I am stuck. I do not know enough about tifflib to change the code myself.
Would anyone be able to explain the code so I know where to start with processing the data?
InfoImage = imfinfo('file.tif');
mImage = InfoImage(1).Width;
nImage = InfoImage(1).Height;
NumberImages = length(InfoImage);
% Class should match BitsPerSample (use 'uint8' for an 8-bit file).
FinalImage = zeros(nImage, mImage, NumberImages, 'uint16');
FileID = tifflib('open', 'file.tif', 'r');   % same file queried above
rps = tifflib('getField', FileID, Tiff.TagID.RowsPerStrip);
for i = 1:NumberImages
    tifflib('setDirectory', FileID, i);
    % Go through each strip of data.
    rps = min(rps, nImage);
    for r = 1:rps:nImage                     % step one strip at a time, not one row
        row_inds = r:min(nImage, r+rps-1);
        stripNum = tifflib('computeStrip', FileID, r);
        FinalImage(row_inds, :, i) = tifflib('readEncodedStrip', FileID, stripNum);
    end
end
tifflib('close', FileID);
Here is the imfinfo output for my TIFF file:
Filename 'img_stack.tif'
FileModDate '25-Oct-2017 09:31:40'
FileSize 49960190
Format 'tif'
FormatVersion []
Width 1280
Height 720
BitDepth 8
ColorType 'grayscale'
FormatSignature [73,73,42,0]
ByteOrder 'little-endian'
NewSubFileType 0
BitsPerSample 8
Compression 'PackBits'
PhotometricInterpretation 'BlackIsZero'
StripOffsets 1x66 double
SamplesPerPixel 1
RowsPerStrip 11
StripByteCounts 1x66 double
XResolution 72
YResolution 72
ResolutionUnit 'Inch'
Colormap []
PlanarConfiguration 'Chunky'
TileWidth []
TileLength []
TileOffsets []
TileByteCounts []
Orientation 1
FillOrder 1
GrayResponseUnit 0.0100000000000000
MaxSampleValue 255
MinSampleValue 0
Thresholding 1
Offset 19840
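The strip geometry in the metadata can be sanity-checked against the loop above: with Height 720 and RowsPerStrip 11, the file should have 66 strips, matching the 1x66 StripOffsets array. A small Python sketch (purely illustrative, mirroring the MATLAB loop):

```python
import math

height = 720          # Height from the imfinfo output
rows_per_strip = 11   # RowsPerStrip tag

# Each strip holds rows_per_strip rows (the last may be shorter), so
# the strip count should match the 1x66 StripOffsets array above.
n_strips = math.ceil(height / rows_per_strip)   # 66

# Row range covered by strip k (0-based), mirroring the inner loop:
def strip_rows(k):
    start = k * rows_per_strip
    return range(start, min(start + rows_per_strip, height))
```

The last strip (k = 65) starts at row 715 and holds only the remaining 5 rows, which is why the loop clamps `row_inds` with `min(nImage, r+rps-1)`.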
I have a binary image which contains BLOBs of distinct sizes.
Input Image
I can calculate the area in pixels using nnz(), which counts the number of white (nonzero) pixels.
% my code
C = imread( 'InputImage' );
C = im2bw( C );
carea = nnz( C );
disp( carea );
%
But I want to know their area in centimeters or millimeters. Is that possible? How?
There is a possible way to estimate the size of an object in an image.
If the digital image itself is the only information you have, you can't know the physical size. Otherwise, you need a "spatial calibration factor": in short, you image an object of known size at the same distance from the camera and compute the pixels per centimeter. A longer answer can be found here:
https://www.mathworks.com/matlabcentral/answers/56087-how-can-i-find-the-spatial-calibration-factor
Normally, such a task would require segmentation and localization of the object, but I assume that in your problem the image is always binary and contains only one white object. You also need to know that the precision of your measurement is bounded by the discretization error. For example, if you photograph a 10.49 m by 10.49 m square (relative to the camera position) at a very low resolution (e.g. 100 pixels total, 10 per side), errors of up to 0.5 meters will prevail: you might simply miss the 49 centimeters in each of the two dimensions, and with a binary digital image there is not much you can do to get rid of this error.
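The calibration idea can be sketched in a few lines (Python here for illustration; the function name and the example numbers are mine, not from the linked answer):

```python
def area_cm2(pixel_count, pixels_per_cm):
    # pixels_per_cm is the spatial calibration factor: pixels spanned
    # per centimeter, measured from a reference object of known size
    # imaged at the same distance from the camera.
    return pixel_count / (pixels_per_cm ** 2)

# Hypothetical numbers: a 10 cm ruler spans 200 pixels -> 20 px/cm,
# so a blob of 5000 white pixels covers 5000 / 20**2 = 12.5 cm^2.
blob_area = area_cm2(5000, 200 / 10)
```

Note that the factor is squared because an area scales with the square of the linear calibration.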
I don't know what the source of the image is, but PNG files have header fields that encode the resolution. You can use the function imfinfo to get this information. For your image, this is what I get:
>> info = imfinfo('czYGP.png')
info =
struct with fields:
Filename: 'czYGP.png'
FileModDate: '08-May-2017 15:00:13'
FileSize: 1275
Format: 'png'
FormatVersion: []
Width: 266
Height: 280
BitDepth: 24
ColorType: 'truecolor'
FormatSignature: [137 80 78 71 13 10 26 10]
Colormap: []
Histogram: []
InterlaceType: 'none'
Transparency: 'none'
SimpleTransparencyData: []
BackgroundColor: []
RenderingIntent: 'perceptual'
Chromaticities: [0.3127 0.3290 0.6400 0.3300 0.3000 0.6000 0.1500 0.0600]
Gamma: 0.4545
XResolution: 3779
YResolution: 3779
ResolutionUnit: 'meter'
XOffset: []
YOffset: []
OffsetUnit: []
SignificantBits: []
ImageModTime: []
Title: []
Author: []
Description: []
Copyright: []
CreationTime: []
Software: []
Disclaimer: []
Warning: []
Source: []
Comment: []
OtherText: []
The interesting fields here are 'XResolution' and 'YResolution', both in pixels per 'ResolutionUnit', which is 'meter'. Using this info, we can compute the pixel size for your image:
pixelSize = 100./[info.XResolution info.YResolution]; % In cm/pixel
pixelSize = 1000./[info.XResolution info.YResolution]; % In mm/pixel
Now, all you have to do is multiply any area measurements you get by the pixel area, like so:
carea = nnz(C)*prod(pixelSize);
NOTE: Of course, this all assumes that this header information was set to the proper values, and not just default values or arbitrarily set or modified at any point. This is why the source of the image matters, and if it's trustworthy (like if it's from some medical imaging device or software).
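The same arithmetic, worked through in Python with the header values shown above (the white-pixel count is a hypothetical stand-in for `nnz(C)`):

```python
x_res = y_res = 3779      # XResolution/YResolution: pixels per meter
pixel_size_cm = (100 / x_res, 100 / y_res)     # ~0.026 cm per pixel
pixel_area_cm2 = pixel_size_cm[0] * pixel_size_cm[1]

white_pixels = 5000       # hypothetical nnz(C) count
area = white_pixels * pixel_area_cm2           # ~3.5 cm^2
```

This is exactly `carea = nnz(C)*prod(pixelSize)` from the MATLAB snippet, just spelled out step by step.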
I am having a problem with the file size growing when I add opacity to an image. My original image is 230 KB.
After I resize the image using this code:
Method 1: imh=imgg.resize((1000,500),Image.ANTIALIAS) #filesize is 558 KB
Method 2: imh=imgg.resize((1000,500),Image.ANTIALIAS)
im2 = imh.convert('P', palette=Image.ADAPTIVE) #filesize is 170KB
Now I am adding transparency to the image using this code:
def reduce_opacity(im, opacity, pathname):
    assert 0 <= opacity <= 1
    if im.mode != 'RGBA':
        im = im.convert('RGBA')
    else:
        im = im.copy()
    alpha = im.split()[3]
    alpha = ImageEnhance.Brightness(alpha).enhance(opacity)
    im.putalpha(alpha)
    im.save(pathname)
    return im
Method 1: filesize is 598 KB
Method 2: filesize is 383 KB
So the best code I have so far is
imh=imgg.resize((1000,500),Image.ANTIALIAS)
im2 = imh.convert('P', palette=Image.ADAPTIVE)
reduce_opacity(im2,0.5,name)
which gives me a file size of 383 KB. To add opacity, the image has to be opened in RGBA mode, which increases the file size from 170 KB to 383 KB. I am not satisfied with this; I need to reduce the size further. Is there any way to achieve that without compromising the quality to a great extent?
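One possible direction (a sketch, not a guaranteed fix for your exact images): after applying the alpha channel, quantize the RGBA image back down to a 256-colour palette. Pillow's fast-octree method (method=2) is the one that supports RGBA input, and PNG can store palette images with per-entry transparency, which is usually smaller than full RGBA:

```python
import io
from PIL import Image, ImageEnhance

# Small solid-colour stand-in for the resized photo.
im = Image.new("RGB", (100, 50), (200, 120, 40)).convert("RGBA")

# Reduce opacity as in the question.
alpha = im.split()[3]
im.putalpha(ImageEnhance.Brightness(alpha).enhance(0.5))

# Quantize back to a palette while keeping alpha; method=2 is
# fast octree, the method Pillow supports for RGBA images.
pal = im.quantize(colors=256, method=2)

buf_rgba, buf_pal = io.BytesIO(), io.BytesIO()
im.save(buf_rgba, "PNG", optimize=True)
pal.save(buf_pal, "PNG", optimize=True)
```

How much this helps depends on how many distinct colours survive the quantization; for photographic content the quality loss may or may not be acceptable.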
Could you please help me with this question:
Assume that, on average, in binary images 75% of the pixels are white and 25% are black. What is the entropy of this source? Model this source in MATLAB and generate some sample images according to this process.
To find the entropy, you just need to apply the definition:
H = -0.25 * log2(0.25) - 0.75 * log2(0.75)   % ≈ 0.8113
Since we are using log2, the result is in bits (per pixel).
As for generating a 512x512 binary (black-and-white) image in MATLAB, you can simply do:
im = rand(512) < 0.75;
By convention, true = 1 = white and false = 0 = black.
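The same computation and sampling process, sketched in Python for comparison (pure standard library; the image is a nested list of booleans rather than a MATLAB matrix):

```python
import math
import random

p_white = 0.75
# Binary entropy of the source, in bits per pixel:
h = -p_white * math.log2(p_white) - (1 - p_white) * math.log2(1 - p_white)
# h ≈ 0.8113

# Sample a 512x512 binary image: True = white with probability 0.75.
img = [[random.random() < p_white for _ in range(512)] for _ in range(512)]
white_fraction = sum(sum(row) for row in img) / 512 ** 2
```

With 512x512 = 262,144 independent pixels, the observed white fraction will be very close to 0.75 on every run.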
This question may be pretty basic, so please bear with me. I have 4 pixel coordinates and an image. I want to segment the image region within these 4 points and make a new image. Can you please tell me the easiest way to do this?
Look at roipoly, using the r and c inputs in addition to the input image I.
Assuming you have a coordinate list xcoord matching ycoord and want the smallest rectangle that contains your pixels:
myImage = rand(100);
xcoord = [12 16 22 82];
ycoord = [24 70 12 34];
% Rows index y and columns index x:
mySegment = myImage(min(ycoord):max(ycoord), min(xcoord):max(xcoord));
I'm trying to find a way to compare two images.
Let me give you an example so you can better understand: my app will generate a random color (it picks a value from 0 to 255 for R, then for G, then for B, and the result is a completely random RGB color).
Now the user will take a photo with the iPhone camera, and the app will compare that color with the image.
Example: the app selects RGB = 205,133,63, which is brown; the user takes a picture of a brown detail of a desk. Now I need to compare the brown selected by the app with the brown of the picture and display a result (for example: "the picture is 88% faithful to the given color").
I found this example on the internet, but I can't figure out how to implement it in my app: http://www.youtube.com/watch?v=hWRPn7IsChI
Thanks!!
Marco
There are plenty of ways you can do this. If you want to keep it simple, you can average the colours for your entire image. For the sake of simplicity, let's say your image only has two pixels:
RGB0 = 255, 0, 0 (red)
RGB1 = 0, 0, 0 (black)
Average between the two will be
RGB_AVG = 128, 0, 0 (dark red)
Now you can calculate the difference between this average and the selected colour 205, 133, 63. This too you can do in many different ways. Here is one:
R = 205 - 128 = 77
G = 133 - 0 = 133
B = 63 - 0 = 63
Total = 77 + 133 + 63 = 273
Total score = (765 - 273) / 765 ≈ 64.3%   (765 = 3 × 255 is the maximum possible total difference)
This is just one way, you could collect all colours in a dictionary and count them if you need it to be super accurate. It all depends on what you want to achieve.
Btw: make sure the differences don't come out negative when a channel average is higher than your sample colour; take the absolute value (ABS) of each difference. The above is just an example.
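The scheme above, written out as a small function (Python for illustration; the function name is mine). Absolute values handle channels where the image average exceeds the target:

```python
def color_match_score(target, actual):
    # Sum of per-channel absolute differences, scaled against the
    # maximum possible total difference 3 * 255 = 765.
    total_diff = sum(abs(t - a) for t, a in zip(target, actual))
    return (765 - total_diff) / 765 * 100

# The worked example: average image colour (128, 0, 0) against the
# target (205, 133, 63); differences 77 + 133 + 63 = 273.
score = color_match_score((205, 133, 63), (128, 0, 0))   # ≈ 64.3
```

For a real photo you would first average all pixel colours (or bucket them in a dictionary, as the answer suggests) and feed that average in as `actual`.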