Gradient vanishes when using batch normalization in Caffe

Hi all,
I ran into problems when using batch normalization in Caffe.
Here is the relevant code from my train_val.prototxt:
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "conv0"
  top: "conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "gaussian"
      std: 0.0589
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    engine: CUDNN
  }
}
layer {
  name: "bnorm1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param {
    use_global_stats: false
  }
}
layer {
  name: "scale1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "conv16"
  type: "Convolution"
  bottom: "conv1"
  top: "conv16"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  ...
However, the training does not converge. If I remove the BN layers (BatchNorm + Scale), training converges. So I started comparing the log files with and without the BN layers. Here are the log files, generated with debug_info: true.
With BN:
I0804 10:22:42.074671 8318 net.cpp:638] [Forward] Layer loadtestdata, top blob data data: 0.368457
I0804 10:22:42.074757 8318 net.cpp:638] [Forward] Layer loadtestdata, top blob label data: 0.514496
I0804 10:22:42.076117 8318 net.cpp:638] [Forward] Layer conv0, top blob conv0 data: 0.115678
I0804 10:22:42.076200 8318 net.cpp:650] [Forward] Layer conv0, param blob 0 data: 0.0455077
I0804 10:22:42.076273 8318 net.cpp:650] [Forward] Layer conv0, param blob 1 data: 0
I0804 10:22:42.076539 8318 net.cpp:638] [Forward] Layer relu0, top blob conv0 data: 0.0446758
I0804 10:22:42.078435 8318 net.cpp:638] [Forward] Layer conv1, top blob conv1 data: 0.0675479
I0804 10:22:42.078516 8318 net.cpp:650] [Forward] Layer conv1, param blob 0 data: 0.0470226
I0804 10:22:42.078589 8318 net.cpp:650] [Forward] Layer conv1, param blob 1 data: 0
I0804 10:22:42.079108 8318 net.cpp:638] [Forward] Layer bnorm1, top blob conv1 data: 0
I0804 10:22:42.079197 8318 net.cpp:650] [Forward] Layer bnorm1, param blob 0 data: 0
I0804 10:22:42.079270 8318 net.cpp:650] [Forward] Layer bnorm1, param blob 1 data: 0
I0804 10:22:42.079350 8318 net.cpp:650] [Forward] Layer bnorm1, param blob 2 data: 0
I0804 10:22:42.079421 8318 net.cpp:650] [Forward] Layer bnorm1, param blob 3 data: 0
I0804 10:22:42.079505 8318 net.cpp:650] [Forward] Layer bnorm1, param blob 4 data: 0
I0804 10:22:42.080267 8318 net.cpp:638] [Forward] Layer scale1, top blob conv1 data: 0
I0804 10:22:42.080345 8318 net.cpp:650] [Forward] Layer scale1, param blob 0 data: 1
I0804 10:22:42.080418 8318 net.cpp:650] [Forward] Layer scale1, param blob 1 data: 0
I0804 10:22:42.080651 8318 net.cpp:638] [Forward] Layer relu1, top blob conv1 data: 0
I0804 10:22:42.082074 8318 net.cpp:638] [Forward] Layer conv16, top blob conv16 data: 0
I0804 10:22:42.082154 8318 net.cpp:650] [Forward] Layer conv16, param blob 0 data: 0.0485365
I0804 10:22:42.082226 8318 net.cpp:650] [Forward] Layer conv16, param blob 1 data: 0
I0804 10:22:42.082675 8318 net.cpp:638] [Forward] Layer loss, top blob loss data: 42.0327
Without BN:
I0803 17:01:29.700850 30274 net.cpp:638] [Forward] Layer loadtestdata, top blob data data: 0.320584
I0803 17:01:29.700920 30274 net.cpp:638] [Forward] Layer loadtestdata, top blob label data: 0.236383
I0803 17:01:29.701556 30274 net.cpp:638] [Forward] Layer conv0, top blob conv0 data: 0.106141
I0803 17:01:29.701633 30274 net.cpp:650] [Forward] Layer conv0, param blob 0 data: 0.0467062
I0803 17:01:29.701692 30274 net.cpp:650] [Forward] Layer conv0, param blob 1 data: 0
I0803 17:01:29.701835 30274 net.cpp:638] [Forward] Layer relu0, top blob conv0 data: 0.0547961
I0803 17:01:29.702193 30274 net.cpp:638] [Forward] Layer conv1, top blob conv1 data: 0.0716117
I0803 17:01:29.702267 30274 net.cpp:650] [Forward] Layer conv1, param blob 0 data: 0.0473551
I0803 17:01:29.702327 30274 net.cpp:650] [Forward] Layer conv1, param blob 1 data: 0
I0803 17:01:29.702425 30274 net.cpp:638] [Forward] Layer relu1, top blob conv1 data: 0.0318472
I0803 17:01:29.702781 30274 net.cpp:638] [Forward] Layer conv16, top blob conv16 data: 0.0403702
I0803 17:01:29.702847 30274 net.cpp:650] [Forward] Layer conv16, param blob 0 data: 0.0474007
I0803 17:01:29.702908 30274 net.cpp:650] [Forward] Layer conv16, param blob 1 data: 0
I0803 17:01:29.703228 30274 net.cpp:638] [Forward] Layer loss, top blob loss data: 11.2245
It is strange that in the forward pass, every layer starting from the BatchNorm one gives 0!
Also, it is worth mentioning that ReLU (an in-place layer) has only 4 lines, while BatchNorm and Scale (supposedly also in-place layers) have 6 and 3 lines in the log file. Do you know what the problem is?

I don't know what's wrong with your "BatchNorm" layer, but it is very odd:
According to your debug log, your "BatchNorm" layer has 5 (!) internal param blobs (0..4). Looking at the source code of batch_norm_layer.cpp, there should be only 3 internal param blobs:
this->blobs_.resize(3);
I suggest you make sure the implementation of "BatchNorm" you are using is not buggy.
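If it helps, here is a minimal pycaffe sketch for counting the internal param blobs of each "BatchNorm" layer (assuming the Python bindings are built; the prototxt path is a placeholder). A stock Caffe build should print 3 for each one:

import caffe

# Load only the training net definition; no weights are needed for this check.
net = caffe.Net('train_val.prototxt', caffe.TRAIN)
for name, layer in zip(net._layer_names, net.layers):
    if layer.type == 'BatchNorm':
        # A vanilla BatchNorm layer holds mean, variance and the moving-average factor.
        print(name, 'has', len(layer.blobs), 'param blobs')  # expected: 3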
As for the debug log, you can read more here about how to interpret it.
To address your question:
"ReLU [...] has only 4 lines, but BatchNorm and Scale [...] have 6 and 3 lines in the log file"
Note that each layer has one line for "top blob ... data", reporting the mean absolute value of the output blob.
Additionally, each layer has an extra line for each of its internal parameter blobs.
A "ReLU" layer has no internal parameters, and thus no "param blob [...] data" prints. A "Convolution" layer has two internal params (kernels and bias), hence two extra lines for blob 0 and blob 1.

Related

Caffe: classifier with probabilistic output greater than 1

I am trying to run Caffe on CIFAR-10 data using Python. This is the end of my proto.txt (note: my deploy file does not have a loss layer!):
...
layer {
  name: "ampl"
  type: "InnerProduct"
  bottom: "maxPool1"
  top: "ampl"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  inner_product_param {
    num_output: 10
    bias_term: false
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.2
    }
  }
}
layer {
  name: "loss"
  type: "Softmax"
  bottom: "ampl"
  bottom: "label"
  top: "loss"
}
But when I look at my output probabilities, they are not in [0, 1].
This is how I read my output in the test phase:
net = caffe.Net(modelFile, weightsFile, caffe.TEST)
# estimate amplitude
shape = (data.shape[0], net.blobs['ampl'].data.shape[1])
dtype = net.blobs['ampl'].data.dtype
aE = np.ndarray(shape, dtype)
for i in range(data.shape[0]):
    net.blobs['data'].data[...] = data[i].reshape(net.blobs['data'].data.shape)
    net.forward()
    aE[i, :] = net.blobs['ampl'].data
This is the output for first 5 samples:
-0.8576 0 0 0 -1.2853
-1.1855 0 0 0 -0.3572
-2.2088 0 0 0 -2.6844
-1.2650 0 0 0 -3.8973
-1.2844 0 0 0 -3.8011
-1.5247 0 0 0 -3.9778
-1.6097 0 0 0 -3.7351
-1.0909 0 0 0 -3.6270
-1.3660 0 0 0 -0.4569
-1.0892 0 0 0 -0.2500
It seems you are outputting the "ampl" layer, which is a fully connected layer. Only a softmax produces actual probabilities, so you should output a layer that applies the softmax operation (without the loss, just a plain "Softmax" layer).
You could also output the last layer and apply the softmax manually, as in the sketch below.
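A minimal NumPy sketch of the manual approach, applied to the aE array collected in the loop above (the softmax helper is illustrative, not part of the original code):

import numpy as np

def softmax(x, axis=-1):
    # Subtract the row-wise max for numerical stability before exponentiating.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

probs = softmax(aE, axis=1)   # each row now sums to 1 and lies in [0, 1]
print(probs[:5])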

FCN32 Model is not converging and loss is fluctuating after some points. Why?

I am trying to train FCN32. I am training the voc-fcn32s model on my own data, which has an imbalanced class distribution. This is the learning curve for 18,000 iterations:
As you can see, the loss decreases at some points and then fluctuates. Online recommendations I read suggest reducing the learning rate or changing the bias filler value in the convolution layers. So what I did is change the train_val.prototxt as follows for these two layers:
....
layer {
  name: "score_fr"
  type: "Convolution"
  bottom: "fc7"
  top: "score_fr"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 5 # the number of classes
    pad: 0
    kernel_size: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0.5 #+
    }
  }
}
layer {
  name: "upscore"
  type: "Deconvolution"
  bottom: "score_fr"
  top: "upscore"
  param {
    lr_mult: 0
  }
  convolution_param {
    num_output: 5 # the number of classes
    bias_term: true #false
    kernel_size: 64
    stride: 32
    group: 5 #2
    weight_filler: {
      type: "bilinear"
      value: 0.5 #+
    }
  }
}
....
And this is the trend of the model:
It seems not much has changed in the behavior of the model.
1) Am I adding these values to the weight_filler the right way?
2) Should I change the learning policy in the solver from fixed to step, reducing the rate by a factor of 10 each time? Will that help tackle this issue?
I am worried that I am doing the wrong things and my model cannot converge. Does anyone have any suggestions about this? What important things should I consider while training the model? What kind of changes can I make to the solver and train_val so that the model converges?
I really appreciate your help.
More details after adding BatchNorm layers
Thanks @Shai and @Jonathan for suggesting to add BatchNorm layers.
I added Batch Normalization layers before the ReLU layers; this is one sample layer:
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 100
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "bn1_1"
  type: "BatchNorm"
  bottom: "conv1_1"
  top: "bn1_1"
  batch_norm_param {
    use_global_stats: false
  }
  param {
    lr_mult: 0
  }
  include {
    phase: TRAIN
  }
}
layer {
  name: "bn1_1"
  type: "BatchNorm"
  bottom: "conv1_1"
  top: "bn1_1"
  batch_norm_param {
    use_global_stats: true
  }
  param {
    lr_mult: 0
  }
  include {
    phase: TEST
  }
}
layer {
  name: "scale1_1"
  type: "Scale"
  bottom: "bn1_1"
  top: "bn1_1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "relu1_1"
  type: "ReLU"
  bottom: "bn1_1"
  top: "bn1_1"
}
layer {
  name: "conv1_2"
  type: "Convolution"
  bottom: "bn1_1"
  top: "conv1_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
As far as I understood from the docs, I should add only one param block to the BatchNorm layer instead of three, since I have single-channel images. Is this understanding correct? That is:
param {
  lr_mult: 0
}
Should I add more parameters to the Scale layer, as the documentation mentions? What is the meaning of these parameters in the Scale layer? For example:
layer {
  bottom: 'layerx-bn'
  top: 'layerx-bn'
  name: 'layerx-bn-scale'
  type: 'Scale'
  scale_param {
    bias_term: true
    axis: 1      # scale separately for each channel
    num_axes: 1  # ... but not spatially (default)
    filler { type: 'constant' value: 1 }           # initialize scaling to 1
    bias_filler { type: 'constant' value: 0.001 }  # initialize bias
  }
}
And this is the structure of the network. I am not sure how much of it is wrong or right. Have I added the layers correctly?
The other question is about debug_info. What is the meaning of the following lines of the log file after activating debug_info? What do data and diff mean? And why are the values 0? Is my net working correctly?
I0123 23:17:49.498327 15230 solver.cpp:228] Iteration 50, loss = 105465
I0123 23:17:49.498337 15230 solver.cpp:244] Train net output #0: accuracy = 0.643982
I0123 23:17:49.498349 15230 solver.cpp:244] Train net output #1: loss = 105446 (* 1 = 105446 loss)
I0123 23:17:49.498359 15230 sgd_solver.cpp:106] Iteration 50, lr = 1e-11
I0123 23:19:12.680325 15230 net.cpp:608] [Forward] Layer data, top blob data data: 34.8386
I0123 23:19:12.680615 15230 net.cpp:608] [Forward] Layer data_data_0_split, top blob data_data_0_split_0 data: 34.8386
I0123 23:19:12.680670 15230 net.cpp:608] [Forward] Layer data_data_0_split, top blob data_data_0_split_1 data: 34.8386
I0123 23:19:12.680778 15230 net.cpp:608] [Forward] Layer label, top blob label data: 0
I0123 23:19:12.680829 15230 net.cpp:608] [Forward] Layer label_label_0_split, top blob label_label_0_split_0 data: 0
I0123 23:19:12.680896 15230 net.cpp:608] [Forward] Layer label_label_0_split, top blob label_label_0_split_1 data: 0
I0123 23:19:12.688591 15230 net.cpp:608] [Forward] Layer conv1_1, top blob conv1_1 data: 0
I0123 23:19:12.688695 15230 net.cpp:620] [Forward] Layer conv1_1, param blob 0 data: 0
I0123 23:19:12.688742 15230 net.cpp:620] [Forward] Layer conv1_1, param blob 1 data: 0
I0123 23:19:12.721791 15230 net.cpp:608] [Forward] Layer bn1_1, top blob bn1_1 data: 0
I0123 23:19:12.721853 15230 net.cpp:620] [Forward] Layer bn1_1, param blob 0 data: 0
I0123 23:19:12.721890 15230 net.cpp:620] [Forward] Layer bn1_1, param blob 1 data: 0
I0123 23:19:12.721901 15230 net.cpp:620] [Forward] Layer bn1_1, param blob 2 data: 96.1127
I0123 23:19:12.996196 15230 net.cpp:620] [Forward] Layer scale4_1, param blob 0 data: 1
I0123 23:19:12.996237 15230 net.cpp:620] [Forward] Layer scale4_1, param blob 1 data: 0
I0123 23:19:12.996939 15230 net.cpp:608] [Forward] Layer relu4_1, top blob bn4_1 data: 0
I0123 23:19:13.012020 15230 net.cpp:608] [Forward] Layer conv4_2, top blob conv4_2 data: 0
I0123 23:19:13.012403 15230 net.cpp:620] [Forward] Layer conv4_2, param blob 0 data: 0
I0123 23:19:13.012446 15230 net.cpp:620] [Forward] Layer conv4_2, param blob 1 data: 0
I0123 23:19:13.015959 15230 net.cpp:608] [Forward] Layer bn4_2, top blob bn4_2 data: 0
I0123 23:19:13.016005 15230 net.cpp:620] [Forward] Layer bn4_2, param blob 0 data: 0
I0123 23:19:13.016046 15230 net.cpp:620] [Forward] Layer bn4_2, param blob 1 data: 0
I0123 23:19:13.016054 15230 net.cpp:620] [Forward] Layer bn4_2, param blob 2 data: 96.1127
I0123 23:19:13.017211 15230 net.cpp:608] [Forward] Layer scale4_2, top blob bn4_2 data: 0
I0123 23:19:13.017251 15230 net.cpp:620] [Forward] Layer scale4_2, param blob 0 data: 1
I0123 23:19:13.017292 15230 net.cpp:620] [Forward] Layer scale4_2, param blob 1 data: 0
I0123 23:19:13.017980 15230 net.cpp:608] [Forward] Layer relu4_2, top blob bn4_2 data: 0
I0123 23:19:13.032080 15230 net.cpp:608] [Forward] Layer conv4_3, top blob conv4_3 data: 0
I0123 23:19:13.032452 15230 net.cpp:620] [Forward] Layer conv4_3, param blob 0 data: 0
I0123 23:19:13.032493 15230 net.cpp:620] [Forward] Layer conv4_3, param blob 1 data: 0
I0123 23:19:13.036018 15230 net.cpp:608] [Forward] Layer bn4_3, top blob bn4_3 data: 0
I0123 23:19:13.036064 15230 net.cpp:620] [Forward] Layer bn4_3, param blob 0 data: 0
I0123 23:19:13.036105 15230 net.cpp:620] [Forward] Layer bn4_3, param blob 1 data: 0
I0123 23:19:13.036114 15230 net.cpp:620] [Forward] Layer bn4_3, param blob 2 data: 96.1127
I0123 23:19:13.038148 15230 net.cpp:608] [Forward] Layer scale4_3, top blob bn4_3 data: 0
I0123 23:19:13.038189 15230 net.cpp:620] [Forward] Layer scale4_3, param blob 0 data: 1
I0123 23:19:13.038230 15230 net.cpp:620] [Forward] Layer scale4_3, param blob 1 data: 0
I0123 23:19:13.038969 15230 net.cpp:608] [Forward] Layer relu4_3, top blob bn4_3 data: 0
I0123 23:19:13.039417 15230 net.cpp:608] [Forward] Layer pool4, top blob pool4 data: 0
I0123 23:19:13.043354 15230 net.cpp:608] [Forward] Layer conv5_1, top blob conv5_1 data: 0
I0123 23:19:13.128515 15230 net.cpp:608] [Forward] Layer score_fr, top blob score_fr data: 0.000975524
I0123 23:19:13.128569 15230 net.cpp:620] [Forward] Layer score_fr, param blob 0 data: 0.0135222
I0123 23:19:13.128607 15230 net.cpp:620] [Forward] Layer score_fr, param blob 1 data: 0.000975524
I0123 23:19:13.129696 15230 net.cpp:608] [Forward] Layer upscore, top blob upscore data: 0.000790174
I0123 23:19:13.129734 15230 net.cpp:620] [Forward] Layer upscore, param blob 0 data: 0.25
I0123 23:19:13.130656 15230 net.cpp:608] [Forward] Layer score, top blob score data: 0.000955503
I0123 23:19:13.130709 15230 net.cpp:608] [Forward] Layer score_score_0_split, top blob score_score_0_split_0 data: 0.000955503
I0123 23:19:13.130754 15230 net.cpp:608] [Forward] Layer score_score_0_split, top blob score_score_0_split_1 data: 0.000955503
I0123 23:19:13.146767 15230 net.cpp:608] [Forward] Layer accuracy, top blob accuracy data: 1
I0123 23:19:13.148967 15230 net.cpp:608] [Forward] Layer loss, top blob loss data: 105320
I0123 23:19:13.149173 15230 net.cpp:636] [Backward] Layer loss, bottom blob score_score_0_split_1 diff: 0.319809
I0123 23:19:13.149323 15230 net.cpp:636] [Backward] Layer score_score_0_split, bottom blob score diff: 0.319809
I0123 23:19:13.150310 15230 net.cpp:636] [Backward] Layer score, bottom blob upscore diff: 0.204677
I0123 23:19:13.152452 15230 net.cpp:636] [Backward] Layer upscore, bottom blob score_fr diff: 253.442
I0123 23:19:13.153218 15230 net.cpp:636] [Backward] Layer score_fr, bottom blob bn7 diff: 9.20469
I0123 23:19:13.153254 15230 net.cpp:647] [Backward] Layer score_fr, param blob 0 diff: 0
I0123 23:19:13.153291 15230 net.cpp:647] [Backward] Layer score_fr, param blob 1 diff: 20528.8
I0123 23:19:13.153420 15230 net.cpp:636] [Backward] Layer drop7, bottom blob bn7 diff: 9.21666
I0123 23:19:13.153554 15230 net.cpp:636] [Backward] Layer relu7, bottom blob bn7 diff: 0
I0123 23:19:13.153856 15230 net.cpp:636] [Backward] Layer scale7, bottom blob bn7 diff: 0
E0123 23:19:14.382714 15230 net.cpp:736] [Backward] All net params (data, diff): L1 norm = (19254.6, 102644); L2 norm = (391.485, 57379.6)
I would really appreciate it if someone who knows could share ideas/links/resources here. Thanks again.
I would not expect changing the bias values to help with the training. The first thing I would try is lowering the learning rate. You can do this manually by retraining the weights that have reached a plateau with a solver that has a lower base_lr. Or you can change your solver.prototxt to use a different update policy: either set the lr_policy to step, or use a solver type such as Adam. See:
http://caffe.berkeleyvision.org/tutorial/solver.html
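For concreteness, here is a hedged sketch of generating such a solver with pycaffe's protobuf bindings; the paths and numeric values are placeholders, not tuned for this model:

from caffe.proto import caffe_pb2

s = caffe_pb2.SolverParameter()
s.net = 'train_val.prototxt'   # placeholder path to the net definition
s.base_lr = 1e-10              # start at (or below) the learning rate that plateaued
s.lr_policy = 'step'           # drop the learning rate by `gamma` every `stepsize` iterations
s.gamma = 0.1
s.stepsize = 10000
s.max_iter = 100000
s.type = 'SGD'                 # switch to 'Adam' to use the Adam update rule instead
with open('solver_step.prototxt', 'w') as f:
    f.write(str(s))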
As @Shai recommends, adding "BatchNorm" layers should help. Batch Normalization is similar to "whitening"/normalizing the input data, but applied to the intermediate layers. The paper on Batch Normalization is on arXiv.
You should also reserve some data for validation. Just looking at the training loss can be misleading.
Regarding "BatchNorm" parameters:
This layer has three internal parameters: (0) mean, (1) variance, and (2) moving average factor, regardless of the number of channels or the shape of your blob. Therefore, if you wish to explicitly set lr_mult you need to define it for all three.
Regarding the zeros in the log:
Please read this post on how to read caffe's debug log.
It seems like you are training your model from scratch (not fine-tuning) and that all the weights are set to zero. This is a very poor initialization strategy.
Please consider defining weight_filler and bias_filler entries to initialize the weights, for instance along the lines of the sketch below.
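A minimal pycaffe NetSpec sketch of emitting a convolution layer with explicit fillers (the layer/blob names and shapes here are illustrative, not taken from your net):

from caffe import layers as L, NetSpec

n = NetSpec()
n.data = L.Input(input_param=dict(shape=dict(dim=[1, 1, 500, 500])))
n.conv1_1 = L.Convolution(n.data, num_output=64, kernel_size=3, pad=100,
                          weight_filler=dict(type='xavier'),
                          bias_filler=dict(type='constant', value=0))
print(n.to_proto())  # paste the generated block into train_val.prototxt

Alternatively, you can write the weight_filler/bias_filler blocks directly inside convolution_param in the prototxt, as in the score_fr layer above.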

Writing TIFF Appears Black But imagesc does not MATLAB

I want to write a tiff again after having read it into MATLAB. The tiff files were generated by some other software:
[flnm,locn]=uigetfile({'*.tif','Image files'}, 'Select an image');
fp=fullfile(locn,flnm);
I = double(imread(fp));
info = imfinfo(fp);
.
.
.
J2 = im2int16(J2);
J2(:,:) = uint16((J2(:,:)./max(max(J2(:,:),[],1)))*65536);
T = Tiff((fullfile(strcat(locn,v_f), (sprintf('%d.tif',i)))), 'w');
tagstruct.ImageLength = size(J2, 1);
tagstruct.ImageWidth = size(J2, 2);
tagstruct.Compression = Tiff.Compression.None;
tagstruct.SampleFormat = Tiff.SampleFormat.Int;
tagstruct.Photometric = Tiff.Photometric.MinIsBlack;
tagstruct.BitsPerSample = info.BitsPerSample; % 32;
tagstruct.SamplesPerPixel = info.SamplesPerPixel; % 1;
tagstruct.PlanarConfiguration = Tiff.PlanarConfiguration.Chunky;
T.setTag(tagstruct);
T.write(J2);
T.close();
This line, which is supposed to rescale the intensity range:
J2(:,:) = uint16((J2(:,:)./max(max(J2(:,:),[],1)))*65536);
was borrowed from here: https://stackoverflow.com/a/19949536/3319527. But my images still come out as black. This should not be the case: when I view the data with imagesc, the signals are quite noisy but the output on screen makes sense, whereas the TIFFs I write to file are quite useless.
Image Parameters:
Filename: [1x172 char]
FileModDate: '24-Jan-2014 14:39:09'
FileSize: 32003
Format: 'tif'
FormatVersion: []
Width: 125
Height: 125
BitDepth: 16
ColorType: 'grayscale'
FormatSignature: [73 73 42 0]
ByteOrder: 'little-endian'
NewSubFileType: 0
BitsPerSample: 16
Compression: 'Uncompressed'
PhotometricInterpretation: 'BlackIsZero'
StripOffsets: [8 2008 4008 6008 8008 10008 12008 14008 16008 18008 20008 22008 24008 26008 28008 30008]
SamplesPerPixel: 1
RowsPerStrip: 8
StripByteCounts: [2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 1250]
XResolution: 72
YResolution: 72
ResolutionUnit: 'Inch'
Colormap: []
PlanarConfiguration: 'Chunky'
TileWidth: []
TileLength: []
TileOffsets: []
TileByteCounts: []
Orientation: 1
FillOrder: 1
GrayResponseUnit: 0.0100
MaxSampleValue: 65535
MinSampleValue: 0
Thresholding: 1
Offset: 31768
ImageDescription: '
From tiff info:
TIFF File: '000001.tif'
Mode: 'r'
Current Image Directory: 1
Number Of Strips: 16
SubFileType: Tiff.SubFileType.Default
Photometric: Tiff.Photometric.MinIsBlack
ImageLength: 125
ImageWidth: 125
RowsPerStrip: 8
BitsPerSample: 16
Compression: Tiff.Compression.None
SampleFormat: Tiff.SampleFormat.UInt
SamplesPerPixel: 1
PlanarConfiguration: Tiff.PlanarConfiguration.Chunky
ImageDescription:
This is frame #0000
Orientation: Tiff.Orientation.TopLeft
Thanks!

Matlab only opens first frame of multi-page tiff stack

I've created multi-page TIFF files with a macro in ImageJ, and I'm now trying to open them using MATLAB, but I can only access the first frame.
Here is the result of imfinfo(filename). Accordingly, I get
length(imfinfo(filename)) = 1
Filename: [1x129 char]
FileModDate: '28-nov-2013 12:27:51'
FileSize: 6.7905e+09
Format: 'tif'
FormatVersion: []
Width: 512
Height: 512
BitDepth: 8
ColorType: 'grayscale'
FormatSignature: [77 77 0 42]
ByteOrder: 'big-endian'
NewSubFileType: 0
BitsPerSample: 8
Compression: 'Uncompressed'
PhotometricInterpretation: 'BlackIsZero'
StripOffsets: 932625
SamplesPerPixel: 1
RowsPerStrip: 512
StripByteCounts: 262144
XResolution: []
YResolution: []
ResolutionUnit: 'None'
Colormap: []
PlanarConfiguration: 'Chunky'
TileWidth: []
TileLength: []
TileOffsets: []
TileByteCounts: []
Orientation: 1
FillOrder: 1
GrayResponseUnit: 0.0100
MaxSampleValue: 255
MinSampleValue: 0
Thresholding: 1
Offset: 8
ImageDescription: 'ImageJ=1.47q
images=25900
slices=25900
loop=false
However, if I open the same TIF file in ImageJ, I can read and scroll through all 25,900 frames... The weird thing is that MATLAB can read previous multi-page TIFFs I had created in ImageJ without my batch macro...
I don't understand what's happening... any help would be greatly appreciated!
Thanks,
Steven
This is actually ImageJ's fault. For large TIFFs, instead of using the BigTIFF standard to save the stack, ImageJ saves a raw file with a fake TIFF header containing the first frame, and happily names it .tif. You can discuss with the ImageJ developers why they think this is a good idea.
To read these stacks into MATLAB, you can use memmapfile or MappedTensor.
You have to loop over all the images in the stack:
fname = 'my_file_with_lots_of_images.tif';
info = imfinfo(fname);
imageStack = [];
numberOfImages = length(info);
for k = 1:numberOfImages
    currentImage = imread(fname, k, 'Info', info);
    imageStack(:,:,k) = currentImage;
end

about tiff image

imfinfo of my image gives the following:
Filename: 'drosophila.tif'
FileModDate: '10-Nov-2009 18:52:42'
FileSize: 264768
Format: 'tif'
FormatVersion: []
Width: 512
Height: 512
BitDepth: 8
ColorType: 'grayscale'
FormatSignature: [73 73 42 0]
ByteOrder: 'little-endian'
NewSubFileType: 0
BitsPerSample: 8
Compression: 'PackBits'
PhotometricInterpretation: 'BlackIsZero'
StripOffsets: [32x1 double]
SamplesPerPixel: 1
RowsPerStrip: 16
StripByteCounts: [32x1 double]
XResolution: 72
YResolution: 72
ResolutionUnit: 'Inch'
Colormap: []
PlanarConfiguration: 'Chunky'
TileWidth: []
TileLength: []
TileOffsets: []
TileByteCounts: []
Orientation: 1
FillOrder: 1
GrayResponseUnit: 0.0100
MaxSampleValue: 255
MinSampleValue: 0
Thresholding: 1
Offset: 264322
How many strips are there?
The generic logic is:
ceil(Height/RowsPerStrip)
The TIFF specification states that the last strip need not be full (hence the CEIL call).
Alternatively, use the length of the StripOffsets field from the info structure. As the name implies, this is a vector of byte offsets to each strip in the file (so there has to be one offset per strip).
32.
Height: 512
RowsPerStrip: 16
512 = 2^9 and 16 = 2^4; divide to get 2^5, which is 32.