Keras sample_weight array error - neural-network

I'm working on a brain lesion segmentation problem and I'm trying to implement a U-Net with code inspired by: https://github.com/jocicmarko/ultrasound-nerve-segmentation
One of the issues I'm trying to overcome is class imbalance (many more non-lesion voxels than lesion voxels). I tried using class_weight but that didn't work, so now I'm trying to use sample_weight, and that's also giving me all sorts of errors.
The first thing I tried was to set sample_weight_mode to "temporal" and feed in a weight matrix of the same shape as my target data:
target_data.shape -> (n_samples, 512 rows/pixels, 512 cols/pixels, 1 channel)
weight_map.shape -> (n_samples, 512 rows/pixels, 512 cols/pixels, 1 channel)
Output:
ValueError: Found a sample_weight array with shape (100, 512, 512, 1). In order to use timestep-wise sample weighting, you should pass a 2D sample_weight array.
The second thing I tried was to flatten the sample_weight array so it would be of shape:
weight_map.shape -> (n_samples, 262144)
Output:
ValueError: Found a sample_weight array with shape (100, 262144) for an input with shape (100, 512, 512, 1). sample_weight cannot be broadcast.
Next I tried following the advice of uschmidt83 (here) and flattening the output of my model along with the corresponding target data:
last_layer = keras.layers.Flatten()(second_last_layer)
target_data.shape -> (n_samples, 262144)
weight_map.shape -> (n_samples, 262144)
Output:
ValueError: Found a sample_weight array for an input with shape (100, 262144). Timestep-wise sample weighting (use of sample_weight_mode="temporal") is restricted to outputs that are at least 3D, i.e. that have a time dimension.
Oddly enough, even if I set sample_weight=None I still get the same error as right above.
Any advice on how to fix this sample_weight error? Here is the basic code to reproduce the error:
https://gist.github.com/andreimouraviev/2642384705034da92d6954dd9993fb4d
Also, if you have advice about how to deal with the class imbalance problem, please let me know.

The weight map needs to be a 1D array per sample (no channel axis), whereas the target keeps an extra channel like the input.
Can you try sample_weight_mode="temporal" with the following dimensions:
input_image -> (n_samples, 512, 512, 1)
target_label -> (n_samples, 262144, 1)
weight_map -> (n_samples, 262144)
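A minimal sketch of what that looks like (`model`, `input_image`, `target_data` and `weight_map` are placeholders from the question; the binary cross-entropy loss is an assumption):
```python
import numpy as np

# Flatten the spatial dimensions so each pixel becomes a "timestep":
# targets  (n, 512, 512, 1) -> (n, 262144, 1)
# weights  (n, 512, 512, 1) -> (n, 262144)
target_label = target_data.reshape((len(target_data), 512 * 512, 1))
weight_map = weight_map.reshape((len(weight_map), 512 * 512))

# The model's output must match the target shape, e.g. by ending with
# keras.layers.Reshape((512 * 512, 1)) instead of Flatten().
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              sample_weight_mode='temporal')
model.fit(input_image, target_label, sample_weight=weight_map)
```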
The following link contains information about class weights:
https://github.com/fchollet/keras/issues/2115
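As for the imbalance itself, one common approach (a sketch of my own, not from the linked issue) is to weight each voxel inversely to its class frequency:
```python
import numpy as np

# target_data: (n_samples, 512, 512, 1) binary lesion mask
lesion_fraction = target_data.mean()  # overall fraction of lesion voxels
per_voxel = np.where(target_data == 1,
                     1.0 / lesion_fraction,          # rare class weighted up
                     1.0 / (1.0 - lesion_fraction))  # common class weighted down
weight_map = per_voxel.reshape((len(per_voxel), -1))  # (n_samples, 262144)
```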

Related

Inputs to Encoder-Decoder LSTMCell/RNN Network

I'm creating an LSTM encoder-decoder network, using Keras, following the code provided here: https://github.com/LukeTonin/keras-seq-2-seq-signal-prediction. The only change I made was to replace the GRUCell with an LSTMCell. Basically both the encoder and decoder consist of 2 layers of 35 LSTMCells. The layers are stacked on top of (and combined with) each other using an RNN layer.
The LSTMCell returns 2 states, whereas the GRUCell returns 1 state. This is where I am encountering an error, as I do not know how to code for the 2 returned states of the LSTMCell.
I have created two models: first, an encoder-decoder model, and second, a prediction model. I am not encountering any problems in the encoder-decoder model, but I am encountering problems in the decoder of the prediction model.
The error I am getting is:
ValueError: Layer rnn_4 expects 9 inputs, but it received 3 input tensors. Input received: [<tf.Tensor 'input_4:0' shape=(?, ?, 1) dtype=float32>, <tf.Tensor 'input_11:0' shape=(?, 35) dtype=float32>, <tf.Tensor 'input_12:0' shape=(?, 35) dtype=float32>]
This error happens when the line below, in the prediction model, is run:
decoder_outputs_and_states = decoder(
    decoder_inputs, initial_state=decoder_states_inputs)
The section of code this fits into is:
encoder_predict_model = keras.models.Model(encoder_inputs,
                                           encoder_states)

decoder_states_inputs = []

# Read layers backwards to fit the format of initial_state.
# For some reason, the states of the model are ordered backwards
# (state of the first layer at the end of the list).
# If instead of a GRU you were using an LSTM Cell, you would have to
# append two Input tensors since the LSTM has 2 states.
for hidden_neurons in layers[::-1]:
    # One state for GRU, but two states for LSTMCell
    decoder_states_inputs.append(keras.layers.Input(shape=(hidden_neurons,)))

decoder_outputs_and_states = decoder(
    decoder_inputs, initial_state=decoder_states_inputs)

decoder_outputs = decoder_outputs_and_states[0]
decoder_states = decoder_outputs_and_states[1:]

decoder_outputs = decoder_dense(decoder_outputs)

decoder_predict_model = keras.models.Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)
Could somebody help me with the for loop above, and the initial states I should be passing to the decoder after that?
I had a similar error and solved it by doing just what the comment says, adding another input tensor:
# If instead of a GRU you were using an LSTM Cell, you would have to
# append two Input tensors since the LSTM has 2 states.
for hidden_neurons in layers[::-1]:
    # Two states (h and c) for the LSTMCell, so append two Inputs
    decoder_states_inputs.append(keras.layers.Input(shape=(hidden_neurons,)))
    decoder_states_inputs.append(keras.layers.Input(shape=(hidden_neurons,)))
That solved the problem for me.
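This works because each LSTMCell carries two state tensors (the hidden state h and the cell state c), so the decoder RNN expects two initial-state inputs per stacked layer; appending an Input twice per layer supplies both.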

How to set proper arguments to build keras Convolution2D NN model [Text Classification]?

I am trying to use a 2D CNN to do text classification on Chinese articles and am having trouble setting the arguments of Keras's Convolution2D. I know the basic flow of Convolution2D for coping with images, but am stuck using my own dataset with Keras.
Input data
My data is 9800 Chinese articles; the max sentence length is 6810 words, and each word is a 200-dimensional word2vec vector.
So the input shape is `(9800, 1, 6810, 200)`.
Code for building model
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Activation, Flatten, Dense

MAX_FEATURES = 6810
# I just randomly picked one filter count; maybe this is the problem?
nb_filter = 128
input_shape = (1, 6810, 200)
# each word vector is 200-dimensional (word2vec size)
embedding_size = 200
# 3-word window
n_gram = 3
# so the kernel spans n_gram rows by embedding_size columns
# size of the fully connected layer (was left undefined in my snippet)
hidden_dims = 128

model = Sequential()
model.add(Convolution2D(nb_filter, n_gram, embedding_size,
                        border_mode='valid', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(100, 1), border_mode='valid'))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(hidden_dims))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# X is (9800, 1, 6810, 200)
model.fit(X, y, batch_size=32,
          nb_epoch=5,
          validation_split=0.1)
Question 1. I have a problem setting the Convolution2D arguments. My research is below.
The official docs do not contain an example of 2D CNN text classification (though they do have a 1D CNN).
The Convolution2D definition is here: https://keras.io/layers/convolutional/
keras.layers.convolutional.Convolution2D(nb_filter, nb_row, nb_col, init='glorot_uniform', activation=None, weights=None, border_mode='valid', subsample=(1, 1), dim_ordering='default', W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True)
nb_filter: Number of convolution filters to use.
nb_row: Number of rows in the convolution kernel.
nb_col: Number of columns in the convolution kernel.
border_mode: 'valid', 'same' or 'full'. ('full' requires the Theano backend.)
Some research about the arguments:
This issue https://github.com/fchollet/keras/issues/233 is about 2D CNNs for text classification; I read all the comments and picked out:
(1) https://github.com/fchollet/keras/issues/233#issuecomment-117427013
model.add(Convolution2D(nb_filter=N_FILTERS, stack_size=1, nb_row=FIELD_SIZE,
                        nb_col=1, subsample=(STRIDE, 1)))
(2) https://github.com/fchollet/keras/issues/233#issuecomment-117700913
sequential.add(Convolution2D(nb_feature_maps, 1, n_gram, embedding_size))
But these seem to differ somewhat from the current Keras version, and the argument names used by different people are a mess (I wish Keras had an easily understandable explanation of its arguments).
Another comment I found about the current API:
https://github.com/fchollet/keras/issues/1665#issuecomment-181181000
The current API is as below:
keras.layers.convolutional.Convolution2D(nb_filter, nb_row, nb_col, init='glorot_uniform', activation='linear', weights=None, border_mode='valid', subsample=(1, 1), dim_ordering='th', W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None)
So (36, 1, 7, 7) seems to be the cause; the correct arguments would be (36, 7, 7, ...).
Based on the research above and my understanding of convolution: Convolution2D creates an (nb_filter, nb_row, nb_col) filter, slides it by the stride to get one filter result, repeats the sliding, and finally combines the results into an array with shape (1, one_sample_article_length[6810] / nb_filter), which goes to the next layer. Is that right? And does my code above set nb_row and nb_col correctly?
Question 2. What are the proper MaxPooling2D arguments? (For my dataset or in general, either is OK.)
I referred to this issue https://github.com/fchollet/keras/issues/233#issuecomment-117427013 to set the argument; there are two variants:
MaxPooling2D(poolsize=(((nb_features - FIELD_SIZE) / STRIDE) + 1, 1))
MaxPooling2D(poolsize=(maxlen - n_gram + 1, 1))
I have no idea why they calculate MaxPooling2D argument like that.
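For what it's worth, both expressions are the standard "valid" convolution output-size rule, o = (i - k) / s + 1, chosen so that the pool spans the conv layer's entire output height (an editorial note, not from the issue thread):
```python
# "valid" convolution output size: o = (i - k) / s + 1
def conv_output_dim(input_dim, kernel, stride=1):
    return (input_dim - kernel) // stride + 1

# A height-6810 input with a 3-row kernel at stride 1:
print(conv_output_dim(6810, 3))  # 6808, i.e. maxlen - n_gram + 1
# Pooling with pool_size=(6808, 1) then keeps one max per filter
# ("max-over-time" pooling).
```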
Question 3. Any recommendation for batch_size and nb_epoch to do such text classification? I have no idea at all.

Deconv implementation in keras output_shape issue

I am implementing the following colorization model written in Caffe, and I am confused about the output_shape parameter to supply in Keras:
model.add(Deconvolution2D(256, 4, 4, border_mode='same',
                          output_shape=(None, 3, 14, 14), subsample=(2, 2),
                          dim_ordering='th', name='deconv_8.1'))
I have added a dummy output_shape parameter, but how can I determine the correct value? In the Caffe model the layer is defined as:
layer {
  name: "conv8_1"
  type: "Deconvolution"
  bottom: "conv7_3norm"
  top: "conv8_1"
  convolution_param {
    num_output: 256
    kernel_size: 4
    pad: 1
    dilation: 1
    stride: 2
  }
}
If I do not supply this parameter, the code gives a parameter error, but I cannot figure out what I should supply as output_shape.
P.S. I already asked this on the Data Science forum with no response, maybe due to the small user base.
What output shape does the Caffe deconvolution layer produce?
For this colorization model in particular, you can simply refer to page 24 of their paper (which is linked on their GitHub page), which gives the output shape of each layer.
So basically the output shape of this deconvolution layer in the original model is [None, 56, 56, 128]. This is what you want to pass to Keras as output_shape. The only problem is, as I mention in the section below, that Keras doesn't really use this parameter to determine the output shape, so you need to run a dummy prediction to find what your other parameters need to be in order for you to get what you want.
More generally the Caffe source code for computing its Deconvolution layer output shape is:
const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;
const int output_dim = stride_data[i] * (input_dim - 1)
+ kernel_extent - 2 * pad_data[i];
Which with a dilation argument equal to 1 reduces to just:
const int output_dim = stride_data[i] * (input_dim - 1)
+ kernel_shape_data[i] - 2 * pad_data[i];
Note that this matches the Keras documentation when the parameter a is zero:
Formula for calculation of the output shape [3], [4]: o = s(i - 1) + a + k - 2p
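As a sanity check, here is that rule in a few lines of Python (the 28x28 input is an assumption chosen to reproduce the 56x56 output quoted above):
```python
def caffe_deconv_output_dim(input_dim, kernel, stride, pad, dilation=1):
    # Caffe's Deconvolution output-size rule from the snippet above
    kernel_extent = dilation * (kernel - 1) + 1
    return stride * (input_dim - 1) + kernel_extent - 2 * pad

# conv8_1 from the prototxt: kernel 4, stride 2, pad 1, dilation 1
print(caffe_deconv_output_dim(28, kernel=4, stride=2, pad=1))  # -> 56
```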
How to verify actual output shape with your Keras backend
This is tricky, because the actual output shape depends on the backend implementation and configuration. Keras is currently unable to determine it on its own, so you actually have to execute a prediction on some dummy input to find the real output shape. Here's an example of how to do this, from the Keras docs for Deconvolution2D:
To pass the correct `output_shape` to this layer,
one could use a test model to predict and observe the actual output shape.
# Examples
```python
# apply a 3x3 transposed convolution with stride 1x1 and 3 output filters on a 12x12 image:
model = Sequential()
model.add(Deconvolution2D(3, 3, 3, output_shape=(None, 3, 14, 14), border_mode='valid', input_shape=(3, 12, 12)))
# Note that you will have to change the output_shape depending on the backend used.

# we can predict with the model and print the shape of the array.
dummy_input = np.ones((32, 3, 12, 12))
# For TensorFlow dummy_input = np.ones((32, 12, 12, 3))
preds = model.predict(dummy_input)
print(preds.shape)
# Theano GPU: (None, 3, 13, 13)
# Theano CPU: (None, 3, 14, 14)
# TensorFlow: (None, 14, 14, 3)
```
Reference: https://github.com/fchollet/keras/blob/master/keras/layers/convolutional.py#L507
Also, you might be curious why the output_shape parameter apparently doesn't really define the output shape. According to the post Deconvolution2D layer in keras, this is why:
Back to Keras and how the above is implemented. Confusingly, the output_shape parameter is actually not used for determining the output shape of the layer, and instead they try to deduce it from the input, the kernel size and the stride, while assuming only valid output_shapes are supplied (though it's not checked in the code to be the case). The output_shape itself is only used as input to the backprop step. Thus, you must also specify the stride parameter (subsample in Keras) in order to get the desired result (which could've been determined by Keras from the given input shape, output shape and kernel size).
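Putting the two parts together for conv8_1 (kernel 4, stride 2, pad 1, 256 filters from the prototxt; the 28x28 input feature map is my assumption): in Keras 1.x it is the stride (subsample), not output_shape, that actually produces the upsampling, so something like the following sketch should yield a 56x56 output:
```python
# Sketch only: the 28x28 input size is assumed, not taken from the source.
model.add(Deconvolution2D(256, 4, 4,
                          output_shape=(None, 256, 56, 56),  # 'th' ordering
                          subsample=(2, 2),      # stride 2 doubles 28 -> 56
                          border_mode='same',    # matches Caffe's pad=1 here
                          dim_ordering='th',
                          name='deconv_8.1'))
# Then verify with a dummy prediction as shown above.
```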

Keras deep autoencoder prediction is inaccurate

I am using a Keras deep autoencoder to reproduce my sparse matrix of dimension [360, 6860]. Each row is the count of trigrams for a protein sequence. The matrix has 2 classes of proteins, but I want the network to be ignorant of that initially, which is why I am using an autoencoder. I am following the Keras blog's autoencoder tutorial for this.
This is my code:
from keras.layers import Input, Dense
from keras.models import Model
from keras import regularizers

# this is the size of our encoded representations
encoding_dim = 32

input_img = Input(shape=(6860,))
encoded = Dense(128, activation='relu',
                activity_regularizer=regularizers.activity_l1(10e-5))(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded)
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(6860, activation='sigmoid')(decoded)

autoencoder = Model(input=input_img, output=decoded)

# this model maps an input to its encoded representation
encoder = Model(input=input_img, output=encoded)

# create placeholders for the encoded input of each decoder stage
encoded_input_1 = Input(shape=(32,))
encoded_input_2 = Input(shape=(64,))
encoded_input_3 = Input(shape=(128,))

# retrieve the last three layers of the autoencoder model
decoder_layer_1 = autoencoder.layers[-3]
decoder_layer_2 = autoencoder.layers[-2]
decoder_layer_3 = autoencoder.layers[-1]

# create the decoder models, one per stage
decoder_1 = Model(input=encoded_input_1, output=decoder_layer_1(encoded_input_1))
decoder_2 = Model(input=encoded_input_2, output=decoder_layer_2(encoded_input_2))
decoder_3 = Model(input=encoded_input_3, output=decoder_layer_3(encoded_input_3))

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

autoencoder.fit(x_train, x_train,
                nb_epoch=100,
                batch_size=40,
                shuffle=True,
                validation_data=(x_test, x_test))
My validation set dimension is [80, 6860]. The problem is that if I use the decoder to predict from the test set, my predictions are really off. For example, if I predict with the following code:
# encode and decode some rows
# note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder_1.predict(encoded_imgs)
decoded_imgs = decoder_2.predict(decoded_imgs)
decoded_imgs = decoder_3.predict(decoded_imgs)

print(x_test[3, np.where(x_test[3, :] != 0)[0]])
print(decoded_imgs[3, np.where(x_test[3, :] != 0)[0]])
The nonzero values of a single row of my test set are:
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 2. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
For the same row, the autoencoder's predictions at the same indices are:
[ 0.04615583 0.04613763 0.10268984 0.00286385 0.0030572 0.02551027
0.00552908 0.09686473 0.02554915 0.0082816 0.02254158 0.01127195
0.00305908 0.17113154 0.01140419 0.03370495 0.00515486 0.02614204
0.00558715 0.02835727 0.0029659 0.01425297 0.00834536 0.04502939
0.02260707 0.01131396 0.00561662 0.01131314 0.00493734 0.00265232
0.0056083 0.01724379 0.06099484 0.03738695 0.01128869 0.01995548
0.00562622 0.00556281 0.01732991 0.03142899 0.05339266 0.04778111
0.00292415 0.02264618 0.01419865 0.00550648 0.00836777 0.01139715]
Now, at first I thought maybe I could use some kind of thresholding to recover the 1's from these values, but they seem pretty random. For a single row, at the first 50 zero-valued positions of my test set, my autoencoder predicts:
[ 0.14251608 0.00118295 0.00118732 0.00304095 0.031255 0.00108441
0.0201351 0.00853934 0.00558488 0.00281343 0.00296877 0.00109651
0.01129742 0.00827519 0.0170884 0.01417614 0.01714166 0.00549215
0.00099755 0.00558552 0.00829634 0.01988331 0.00092845 0.00294271
0.01429107 0.01137067 0.01137967 0.01121876 0.00491931 0.00562285
0.0055124 0.01720702 0.0142925 0.00553411 0.00551252 0.00281541
0.01145663 0.002876 0.00555185 0.00525392 0.01421779 0.00273949
0.01698892 0.02529835 0.0112521 0.01130333 0.00554186 0.00291986
0.00554437 0.01144382]
How can I improve the predictions? What am I doing wrong here? I must say that the data is hugely sparse. If you want, you can download the toy data from here. Please let me know if you have any questions.
One of the most important reasons is probably that your training set is just too small. You have a fully connected network, and with 7 layers (including input and output) the number of parameters is huge, close to 1.8M, while you only have 360 training samples. So the parameters are essentially untrained.
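As a quick check of that figure (using the layer sizes from the question):
```python
# Dense layer parameters: inputs * outputs + biases, per consecutive pair
dims = [6860, 128, 64, 32, 64, 128, 6860]
n_params = sum(i * o + o for i, o in zip(dims[:-1], dims[1:]))
print(n_params)  # 1783916 -- roughly 1.8M parameters for 360 training rows
```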
You can improve your results in two ways. One is of course to get more training data. The other is to follow the convolutional example in the later part of the tutorial; CNNs are popular partly because they greatly reduce the number of parameters.

Performing Kernel Density Estimations in MATLAB

I have been using MATLAB to perform kernel density estimations (KDE) on UTM data (X and Y coordinates). I ran into a problem that I do not understand.
I perform the KDEs with a sample of 45 points. Everything works fine and I produce the graphs with contours.
[bandwidth, density, X, Y] = kde2d(data)
The function kde2d is code by Zdravko Botev; I obtained it from his MathWorks File Exchange submission. The variable 'data' is a 45x2 array of my data: the first column holds the X coordinates and the second the Y.
The problem comes when I run the same line of code on a subset of those 45 points. I get a recurring error:
Error using fzero (line 274)
The function values at the interval endpoints must differ in sign.

Error in kde2d (line 101)
t_star=fzero(@(t)(t-evolve(t)),[0,0.1]);
I get the same error for a bunch of those subsets on a bunch of different sets of 45 points.
The complete set has these 45 values:
1594436.281 572258.1272
1594418.48 572357.5859
1594471.362 572385.5186
1594516.726 572266.8206
1594415.313 572369.2754
1594519.701 572272.7153
1594415.377 572363.4139
1594468.365 572381.5779
1594518.139 572276.6059
1594425.496 572271.6874
1594524.259 572272.7651
1594502.555 572172.8749
1594516.747 572264.867
1594485.314 572360.2689
1594476.027 572375.7997
1594556.087 572419.6609
1594522.718 572274.7021
1594472.775 572395.3039
1594554.568 572419.6443
1594527.255 572276.7054
1594474.315 572393.3669
1594522.697 572276.6557
1594471.319 572389.4262
1594460.854 572373.6799
1594546.022 572228.0609
1594460.79 572379.5414
1594468.323 572385.4855
1594466.953 572371.7926
1594519.722 572270.7614
1594396.76 572398.3826
1594468.131 572403.0693
1594418.288 572375.1697
1594396.377 572433.5499
1594448.287 572271.9361
1594510.541 572276.523
1594424.466 572226.7345
1594413.773 572371.2124
1594511.848 572296.0774
1594513.367 572296.094
1594424.488 572224.7805
1594468.152 572401.1153
1594421.37 572371.2953
1594446.768 572271.9195
1594468.152 572401.1153
1594448.799 572225.0457
One of the subsets I am trying to use is this:
1594436.281 572258.1272
1594418.48 572357.5859
1594471.362 572385.5186
1594516.726 572266.8206
1594415.313 572369.2754
1594519.701 572272.7153
1594415.377 572363.4139
1594468.365 572381.5779
1594518.139 572276.6059
1594425.496 572271.6874
I am not sure if I should include any of Botev's code. I am hoping that the error message can be explained on its own; if not, I can provide more. Thank you very much.