Extracting layers in Keras model - neural-network

The following is the model.summary() output of a Keras model. I am trying to extract a sub-model that starts from the output of layer flatten_1 and ends at the output of the last layer (dense_3). Is this possible? If so, what would the Keras syntax be?
zero_padding2d_1 (ZeroPaddin (None, 226, 226, 3) 0
conv2d_1 (Conv2D) (None, 224, 224, 64) 1792
max_pooling2d_1 (MaxPooling2 (None, 112, 112, 64) 0
zero_padding2d_2 (ZeroPaddin (None, 114, 114, 64) 0
conv2d_2 (Conv2D) (None, 112, 112, 128) 73856
max_pooling2d_2 (MaxPooling2 (None, 56, 56, 128) 0
zero_padding2d_3 (ZeroPaddin (None, 58, 58, 128) 0
conv2d_3 (Conv2D) (None, 56, 56, 256) 295168
max_pooling2d_3 (MaxPooling2 (None, 28, 28, 256) 0
flatten_1 (Flatten) (None, 200704) 0
dense_1 (Dense) (None, 64) 12845120
dropout_1 (Dropout) (None, 64) 0
dense_2 (Dense) (None, 256) 16640
dropout_2 (Dropout) (None, 256) 0
dense_3 (Dense) (None, 2) 514
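One way to do this, sketched below, assumes the layers after flatten_1 form a plain sequential chain (as they do in the summary above). The stand-in model uses made-up small shapes; only the layer names mirror the summary. The idea is to create a new Input with flatten_1's output shape and re-apply each subsequent layer, which reuses the trained weights.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical stand-in for the original network (sizes are placeholders;
# only the layer names mirror the summary above).
inp = keras.Input(shape=(8, 8, 3))
x = layers.Conv2D(4, 3, name="conv2d_1")(inp)
x = layers.Flatten(name="flatten_1")(x)
x = layers.Dense(16, name="dense_1")(x)
x = layers.Dense(2, name="dense_3")(x)
model = keras.Model(inp, x)

# New input with the shape of flatten_1's output, then re-apply every
# layer that follows flatten_1 in the (purely sequential) chain.
feat_dim = int(model.get_layer("flatten_1").output.shape[-1])
sub_input = keras.Input(shape=(feat_dim,))
y = sub_input
started = False
for layer in model.layers:
    if started:
        y = layer(y)  # reuses the trained weights of `layer`
    if layer.name == "flatten_1":
        started = True
sub_model = keras.Model(sub_input, y)
```

Feeding `sub_model` a batch of flatten_1-sized vectors then yields the dense_3 output directly. This only works cleanly when no layer after the split point has multiple inputs.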

Related

facenet triplet loss with keras

I am trying to implement FaceNet in Keras with the TensorFlow backend, and I have a problem with the triplet loss.
I call the fit function with 3*n images, and I define my custom loss function as follows:
def triplet_loss(self, y_true, y_pred):
    embeddings = K.reshape(y_pred, (-1, 3, output_dim))
    positive_distance = K.mean(K.square(embeddings[:, 0] - embeddings[:, 1]), axis=-1)
    negative_distance = K.mean(K.square(embeddings[:, 0] - embeddings[:, 2]), axis=-1)
    return K.mean(K.maximum(0.0, positive_distance - negative_distance + _alpha))

self._model.compile(loss=triplet_loss, optimizer="sgd")
self._model.fit(x=x, y=y, nb_epoch=1, batch_size=len(x))
where y is just a dummy array filled with 0s.
The problem is that even after the first iteration with batch size 20, the model starts predicting the same embedding for all images. When I first run prediction on the batch, every embedding is different; then I call fit, predict again, and suddenly the embeddings become almost identical for every image in the batch.
Also notice that there is a Lambda layer at the end of the model. It normalizes the output of the net so that all embeddings have unit length, as suggested in the FaceNet paper.
Can anybody help me out here?
Model summary
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 112, 112, 64) 9472 input_1[0][0]
____________________________________________________________________________________________________
batchnormalization_1 (BatchNormal(None, 112, 112, 64) 128 convolution2d_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 56, 56, 64) 0 batchnormalization_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D) (None, 56, 56, 64) 4160 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
batchnormalization_2 (BatchNormal(None, 56, 56, 64) 128 convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D) (None, 56, 56, 192) 110784 batchnormalization_2[0][0]
____________________________________________________________________________________________________
batchnormalization_3 (BatchNormal(None, 56, 56, 192) 384 convolution2d_3[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D) (None, 28, 28, 192) 0 batchnormalization_3[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D) (None, 28, 28, 96) 18528 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_7 (Convolution2D) (None, 28, 28, 16) 3088 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
maxpooling2d_3 (MaxPooling2D) (None, 28, 28, 192) 0 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D) (None, 28, 28, 64) 12352 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_6 (Convolution2D) (None, 28, 28, 128) 110720 convolution2d_5[0][0]
____________________________________________________________________________________________________
convolution2d_8 (Convolution2D) (None, 28, 28, 32) 12832 convolution2d_7[0][0]
____________________________________________________________________________________________________
convolution2d_9 (Convolution2D) (None, 28, 28, 32) 6176 maxpooling2d_3[0][0]
____________________________________________________________________________________________________
merge_1 (Merge) (None, 28, 28, 256) 0 convolution2d_4[0][0]
convolution2d_6[0][0]
convolution2d_8[0][0]
convolution2d_9[0][0]
____________________________________________________________________________________________________
convolution2d_11 (Convolution2D) (None, 28, 28, 96) 24672 merge_1[0][0]
____________________________________________________________________________________________________
convolution2d_13 (Convolution2D) (None, 28, 28, 32) 8224 merge_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_4 (MaxPooling2D) (None, 28, 28, 256) 0 merge_1[0][0]
____________________________________________________________________________________________________
convolution2d_10 (Convolution2D) (None, 28, 28, 64) 16448 merge_1[0][0]
____________________________________________________________________________________________________
convolution2d_12 (Convolution2D) (None, 28, 28, 128) 110720 convolution2d_11[0][0]
____________________________________________________________________________________________________
convolution2d_14 (Convolution2D) (None, 28, 28, 64) 51264 convolution2d_13[0][0]
____________________________________________________________________________________________________
convolution2d_15 (Convolution2D) (None, 28, 28, 64) 16448 maxpooling2d_4[0][0]
____________________________________________________________________________________________________
merge_2 (Merge) (None, 28, 28, 320) 0 convolution2d_10[0][0]
convolution2d_12[0][0]
convolution2d_14[0][0]
convolution2d_15[0][0]
____________________________________________________________________________________________________
convolution2d_16 (Convolution2D) (None, 28, 28, 128) 41088 merge_2[0][0]
____________________________________________________________________________________________________
convolution2d_18 (Convolution2D) (None, 28, 28, 32) 10272 merge_2[0][0]
____________________________________________________________________________________________________
convolution2d_17 (Convolution2D) (None, 14, 14, 256) 295168 convolution2d_16[0][0]
____________________________________________________________________________________________________
convolution2d_19 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_18[0][0]
____________________________________________________________________________________________________
maxpooling2d_5 (MaxPooling2D) (None, 14, 14, 320) 0 merge_2[0][0]
____________________________________________________________________________________________________
merge_3 (Merge) (None, 14, 14, 640) 0 convolution2d_17[0][0]
convolution2d_19[0][0]
maxpooling2d_5[0][0]
____________________________________________________________________________________________________
convolution2d_21 (Convolution2D) (None, 14, 14, 96) 61536 merge_3[0][0]
____________________________________________________________________________________________________
convolution2d_23 (Convolution2D) (None, 14, 14, 32) 20512 merge_3[0][0]
____________________________________________________________________________________________________
maxpooling2d_6 (MaxPooling2D) (None, 14, 14, 640) 0 merge_3[0][0]
____________________________________________________________________________________________________
convolution2d_20 (Convolution2D) (None, 14, 14, 256) 164096 merge_3[0][0]
____________________________________________________________________________________________________
convolution2d_22 (Convolution2D) (None, 14, 14, 192) 166080 convolution2d_21[0][0]
____________________________________________________________________________________________________
convolution2d_24 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_23[0][0]
____________________________________________________________________________________________________
convolution2d_25 (Convolution2D) (None, 14, 14, 128) 82048 maxpooling2d_6[0][0]
____________________________________________________________________________________________________
merge_4 (Merge) (None, 14, 14, 640) 0 convolution2d_20[0][0]
convolution2d_22[0][0]
convolution2d_24[0][0]
convolution2d_25[0][0]
____________________________________________________________________________________________________
convolution2d_27 (Convolution2D) (None, 14, 14, 112) 71792 merge_4[0][0]
____________________________________________________________________________________________________
convolution2d_29 (Convolution2D) (None, 14, 14, 32) 20512 merge_4[0][0]
____________________________________________________________________________________________________
maxpooling2d_7 (MaxPooling2D) (None, 14, 14, 640) 0 merge_4[0][0]
____________________________________________________________________________________________________
convolution2d_26 (Convolution2D) (None, 14, 14, 224) 143584 merge_4[0][0]
____________________________________________________________________________________________________
convolution2d_28 (Convolution2D) (None, 14, 14, 224) 226016 convolution2d_27[0][0]
____________________________________________________________________________________________________
convolution2d_30 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_29[0][0]
____________________________________________________________________________________________________
convolution2d_31 (Convolution2D) (None, 14, 14, 128) 82048 maxpooling2d_7[0][0]
____________________________________________________________________________________________________
merge_5 (Merge) (None, 14, 14, 640) 0 convolution2d_26[0][0]
convolution2d_28[0][0]
convolution2d_30[0][0]
convolution2d_31[0][0]
____________________________________________________________________________________________________
convolution2d_33 (Convolution2D) (None, 14, 14, 128) 82048 merge_5[0][0]
____________________________________________________________________________________________________
convolution2d_35 (Convolution2D) (None, 14, 14, 32) 20512 merge_5[0][0]
____________________________________________________________________________________________________
maxpooling2d_8 (MaxPooling2D) (None, 14, 14, 640) 0 merge_5[0][0]
____________________________________________________________________________________________________
convolution2d_32 (Convolution2D) (None, 14, 14, 192) 123072 merge_5[0][0]
____________________________________________________________________________________________________
convolution2d_34 (Convolution2D) (None, 14, 14, 256) 295168 convolution2d_33[0][0]
____________________________________________________________________________________________________
convolution2d_36 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_35[0][0]
____________________________________________________________________________________________________
convolution2d_37 (Convolution2D) (None, 14, 14, 128) 82048 maxpooling2d_8[0][0]
____________________________________________________________________________________________________
merge_6 (Merge) (None, 14, 14, 640) 0 convolution2d_32[0][0]
convolution2d_34[0][0]
convolution2d_36[0][0]
convolution2d_37[0][0]
____________________________________________________________________________________________________
convolution2d_39 (Convolution2D) (None, 14, 14, 144) 92304 merge_6[0][0]
____________________________________________________________________________________________________
convolution2d_41 (Convolution2D) (None, 14, 14, 32) 20512 merge_6[0][0]
____________________________________________________________________________________________________
maxpooling2d_9 (MaxPooling2D) (None, 14, 14, 640) 0 merge_6[0][0]
____________________________________________________________________________________________________
convolution2d_38 (Convolution2D) (None, 14, 14, 160) 102560 merge_6[0][0]
____________________________________________________________________________________________________
convolution2d_40 (Convolution2D) (None, 14, 14, 288) 373536 convolution2d_39[0][0]
____________________________________________________________________________________________________
convolution2d_42 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_41[0][0]
____________________________________________________________________________________________________
convolution2d_43 (Convolution2D) (None, 14, 14, 128) 82048 maxpooling2d_9[0][0]
____________________________________________________________________________________________________
merge_7 (Merge) (None, 14, 14, 640) 0 convolution2d_38[0][0]
convolution2d_40[0][0]
convolution2d_42[0][0]
convolution2d_43[0][0]
____________________________________________________________________________________________________
convolution2d_44 (Convolution2D) (None, 14, 14, 160) 102560 merge_7[0][0]
____________________________________________________________________________________________________
convolution2d_46 (Convolution2D) (None, 14, 14, 64) 41024 merge_7[0][0]
____________________________________________________________________________________________________
convolution2d_45 (Convolution2D) (None, 7, 7, 256) 368896 convolution2d_44[0][0]
____________________________________________________________________________________________________
convolution2d_47 (Convolution2D) (None, 7, 7, 128) 204928 convolution2d_46[0][0]
____________________________________________________________________________________________________
maxpooling2d_10 (MaxPooling2D) (None, 7, 7, 640) 0 merge_7[0][0]
____________________________________________________________________________________________________
merge_8 (Merge) (None, 7, 7, 1024) 0 convolution2d_45[0][0]
convolution2d_47[0][0]
maxpooling2d_10[0][0]
____________________________________________________________________________________________________
convolution2d_49 (Convolution2D) (None, 7, 7, 192) 196800 merge_8[0][0]
____________________________________________________________________________________________________
convolution2d_51 (Convolution2D) (None, 7, 7, 48) 49200 merge_8[0][0]
____________________________________________________________________________________________________
maxpooling2d_11 (MaxPooling2D) (None, 7, 7, 1024) 0 merge_8[0][0]
____________________________________________________________________________________________________
convolution2d_48 (Convolution2D) (None, 7, 7, 384) 393600 merge_8[0][0]
____________________________________________________________________________________________________
convolution2d_50 (Convolution2D) (None, 7, 7, 384) 663936 convolution2d_49[0][0]
____________________________________________________________________________________________________
convolution2d_52 (Convolution2D) (None, 7, 7, 128) 153728 convolution2d_51[0][0]
____________________________________________________________________________________________________
convolution2d_53 (Convolution2D) (None, 7, 7, 128) 131200 maxpooling2d_11[0][0]
____________________________________________________________________________________________________
merge_9 (Merge) (None, 7, 7, 1024) 0 convolution2d_48[0][0]
convolution2d_50[0][0]
convolution2d_52[0][0]
convolution2d_53[0][0]
____________________________________________________________________________________________________
convolution2d_55 (Convolution2D) (None, 7, 7, 192) 196800 merge_9[0][0]
____________________________________________________________________________________________________
convolution2d_57 (Convolution2D) (None, 7, 7, 48) 49200 merge_9[0][0]
____________________________________________________________________________________________________
maxpooling2d_12 (MaxPooling2D) (None, 7, 7, 1024) 0 merge_9[0][0]
____________________________________________________________________________________________________
convolution2d_54 (Convolution2D) (None, 7, 7, 384) 393600 merge_9[0][0]
____________________________________________________________________________________________________
convolution2d_56 (Convolution2D) (None, 7, 7, 384) 663936 convolution2d_55[0][0]
____________________________________________________________________________________________________
convolution2d_58 (Convolution2D) (None, 7, 7, 128) 153728 convolution2d_57[0][0]
____________________________________________________________________________________________________
convolution2d_59 (Convolution2D) (None, 7, 7, 128) 131200 maxpooling2d_12[0][0]
____________________________________________________________________________________________________
merge_10 (Merge) (None, 7, 7, 1024) 0 convolution2d_54[0][0]
convolution2d_56[0][0]
convolution2d_58[0][0]
convolution2d_59[0][0]
____________________________________________________________________________________________________
averagepooling2d_1 (AveragePoolin(None, 1, 1, 1024) 0 merge_10[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 1024) 0 averagepooling2d_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 128) 131200 flatten_1[0][0]
____________________________________________________________________________________________________
lambda_1 (Lambda) (None, 128) 0 dense_1[0][0]
====================================================================================================
Total params: 7456944
____________________________________________________________________________________________________
What could have happened, besides the learning rate simply being too high, is that an unstable triplet selection strategy was effectively used. If, for example, you only use 'hard triplets' (triplets where the a-n distance is smaller than the a-p distance), your network weights can collapse all embeddings to a single point, making the loss always equal to the margin (your _alpha), because all embedding distances become zero.
This can be fixed by also mining other kinds of triplets, such as 'semi-hard triplets', where a-p is smaller than a-n but the gap between them is still smaller than the margin. It may be worth checking which kinds of triplets your batches actually contain. This is explained in more detail in this blog post: https://omoindrot.github.io/triplet-loss
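To make the triplet categories concrete, here is a small NumPy sketch (the function name and margin value are illustrative, not from any library):

```python
import numpy as np

def classify_triplet(anchor, positive, negative, margin=0.2):
    """Label a triplet 'easy', 'semi-hard', or 'hard' by embedding distances."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    if d_an < d_ap:
        return "hard"       # negative is closer to the anchor than the positive
    if d_an < d_ap + margin:
        return "semi-hard"  # negative is farther, but still within the margin
    return "easy"           # loss is already zero for this triplet
```

Training only on the "hard" bucket is what tends to drive the collapse described above; mixing in semi-hard triplets keeps the gradient informative.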
Are you constraining your embeddings to "be on a d-dimensional hypersphere"? Try running tf.nn.l2_normalize on your embeddings right after they come out of the CNN.
The problem could be that the embeddings are taking a shortcut: one easy way to reduce the loss is to collapse everything to zero, and l2_normalize prevents that by forcing them to unit length.
It looks like you'll want to add the normalization right after the last average-pooling layer.
I ran into the same problem and did some research. I think it is because triplet loss needs multiple inputs, which may cause the network to generate degenerate outputs like that. I haven't fixed the problem yet, but you can check the Keras issue page for more details: https://github.com/keras-team/keras/issues/9498.
In that issue, I implemented a fake dataset and a fake triplet loss to reproduce the problem; after I changed the input structure of the network, the loss became normal.
The loss function in TensorFlow requires a list of labels, i.e. a list of integers. I think you are passing a 2D matrix, i.e. one-hot encodings.
Try this:
import keras.backend as K
from tensorflow.contrib.losses.metric_learning import triplet_semihard_loss

def loss(y_true, y_pred):
    y_true = K.argmax(y_true, axis=-1)
    return triplet_semihard_loss(labels=y_true, embeddings=y_pred, margin=1.)

I have a matrix 12*4, and I need to subtract the 3rd column elements of rows that are different

I have a 12x4 matrix in MATLAB,
A =[-1, 3, 152, 41.5 ;
3, 9, 152, 38.7 ;
9, 16, 152, 38.7 ;
16, 23, 129, 53.5 ;
23, 29, 129, 53.5 ;
29, 30, 100, 100 ;
30, 30.5, 83, 83 ;
30.5, 31, 83, 83 ;
31, 35, 83, 83 ;
35, 41, 129, 53.5 ;
41, 48, 129, 53.5 ;
48, 55, 152, 38.7 ] ;
and I need to find the changes between rows: subtract the 3rd-column element of each row from that of the previous row when they differ; if they are the same, move on to the next row.
The answer should be in the form:
B = [16, 23;
29, 29;
30, 17;
35, 46;
48, 23]
For example, the 3rd-column elements of the 3rd and 4th rows are different, so subtracting them gives 23. The first column of output B takes the first-column element of the 4th row.
% Given matrix
A = [-1, 3, 152, 41.5;
     3, 9, 152, 38.7;
     9, 16, 152, 38.7;
     16, 23, 129, 53.5;
     23, 29, 129, 53.5;
     29, 30, 100, 100;
     30, 30.5, 83, 83;
     30.5, 31, 83, 83;
     31, 35, 83, 83;
     35, 41, 129, 53.5;
     41, 48, 129, 53.5;
     48, 55, 152, 38.7];
B = A(:, 2:3);                       % take the two columns of interest
B = B([diff(B(:,2)) ~= 0; true], :); % keep rows where the 3rd column of A changes (plus the last row)
B = [B(1:end-1, 1), abs(diff(B(:,2)))] % 1st column per your condition; 2nd column is the difference
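For comparison, here is a sketch of the same logic in Python/NumPy (0-based slicing, so columns 2-3 of A become the slice 1:3):

```python
import numpy as np

A = np.array([[-1, 3, 152, 41.5],
              [3, 9, 152, 38.7],
              [9, 16, 152, 38.7],
              [16, 23, 129, 53.5],
              [23, 29, 129, 53.5],
              [29, 30, 100, 100],
              [30, 30.5, 83, 83],
              [30.5, 31, 83, 83],
              [31, 35, 83, 83],
              [35, 41, 129, 53.5],
              [41, 48, 129, 53.5],
              [48, 55, 152, 38.7]])

B = A[:, 1:3]                                  # columns 2 and 3 of A
keep = np.append(np.diff(B[:, 1]) != 0, True)  # rows where the value changes, plus the last row
B = B[keep]
out = np.column_stack((B[:-1, 0], np.abs(np.diff(B[:, 1]))))
```

`out` then holds the same five rows as the MATLAB result B.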

numpy: random select K items from total M(M>K) items?

Is there a handy implementation of the MATLAB function randperm in numpy that randomly selects K items out of M total (M > K) and returns the selected indices?
In Matlab,
randperm(100,10)
ans =
82 90 13 89 61 10 27 51 97 88
Yes, with the numpy.random.choice function.
>>> numpy.random.choice(100, 10, replace=False)
array([89, 99, 27, 39, 80, 31, 6, 0, 40, 93])
Note that the resulting range is 0 to M-1. If you need 1 to M like MATLAB, add 1 to the result:
>>> numpy.random.choice(100, 10, replace=False) + 1
array([ 28, 23, 15, 90, 18, 65, 86, 100, 99, 1])
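On newer NumPy (1.17+), the Generator API is the recommended interface; the idea is the same:

```python
import numpy as np

rng = np.random.default_rng()
# 1-based, like MATLAB's randperm(100, 10)
picks = rng.choice(100, size=10, replace=False) + 1
```

`replace=False` is what guarantees the 10 picked indices are distinct.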

how to plot column vector data along y-axis in matlab?

I have this row/column vector.
grades = [90, 100, 80, 70, 75, 88, 98, 78, 86, 95, 100, 92, 29, 50];
plot(grades);
In MATLAB, I want to plot grade values along the x-axis and indices (1-14) along the y-axis. By default, indices are plotted along the x-axis. How can this be achieved?
grades = [90, 100, 80, 70, 75, 88, 98, 78, 86, 95, 100, 92, 29, 50];
figure;
plot(1:length(grades),grades); % Indices along X
figure;
plot(grades,1:length(grades)); % Indices along Y
If you want to plot data in MATLAB, you have to define data sets for all the axes you are interested in.
In your case, define the x-axis data and the y-axis data.
So, for example, your y-axis data would be
grades = [90 100 80 70 75 88 98 78 86 95 100 92 29 50];
For your x data you can use the following:
x = 1:14;
Then you have the following plot command:
plot(x, grades)

Create an new DenseMatrix from an submatrix in Breeze using Scala

I have a DenseMatrix (original). I slice it to remove the last column (subset). After that I want to access the data in the subset. However, subset.data still points to the data of the old DenseMatrix (original). Any idea what I'm missing here and how to fix it?
original: breeze.linalg.DenseMatrix[Int] =
1 200 3 0
10 201 4 0
111 200 0 100
150 195 0 160
200 190 0 150
scala> val numcols = original.cols
numcols: Int = 4
scala> val subset = original(::, 0 to numcols - 2)
subset: breeze.linalg.DenseMatrix[Int] =
1 200 3
10 201 4
111 200 0
150 195 0
200 190 0
scala> subset.data
res0: Array[Int] = Array(1, 10, 111, 150, 200, 200, 201, 200, 195, 190, 3, 4, 0, 0, 0, 0, 0, 100, 160, 150)
scala> subset.data.size
res1: Int = 20
Never mind, I figured out one way of doing it. Slicing in Breeze returns a view that shares the backing array with the original matrix, which is why subset.data still exposes all of original's elements. Calling toDenseMatrix materializes the slice into a new matrix with its own backing array:
scala> subset.toDenseMatrix.data
res10: Array[Int] = Array(1, 10, 111, 150, 200, 200, 201, 200, 195, 190, 3, 4, 0, 0, 0)
scala> subset.toDenseMatrix.data.size
res11: Int = 15
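For what it's worth, NumPy has the same view-versus-copy distinction, which may make the Breeze behavior less surprising (a small Python sketch for comparison):

```python
import numpy as np

original = np.array([[1, 200, 3, 0],
                     [10, 201, 4, 0]])
subset = original[:, :3]        # basic slicing returns a *view*
assert subset.base is original  # shares the original's backing memory

copy = subset.copy()            # analogous to Breeze's toDenseMatrix: fresh storage
assert copy.base is None        # owns its own data
```

Mutating `subset` writes through to `original`, while `copy` is independent.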