How to pass multiple inputs in Keras?

I would like to compare images with a custom model, so I have a model with 2 inputs.
How do I feed two inputs for training and validation?
If I do
self.classifier.fit([x1_train, x2_train], y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=([x1_valid, x2_valid], y_valid),
                    shuffle=True)
it complains
AttributeError: 'list' object has no attribute 'shape'
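A common fix (assuming x1_train, x2_train, x1_valid and x2_valid are plain Python lists, which is the usual cause of this error): convert every input to a numpy array before calling fit, since Keras inspects the .shape attribute that lists do not have. A minimal sketch:

import numpy as np

# Plain lists have no .shape attribute, hence the AttributeError.
# Converting each input (and the labels) to numpy arrays fixes this.
x1_train, x2_train = np.asarray(x1_train), np.asarray(x2_train)
x1_valid, x2_valid = np.asarray(x1_valid), np.asarray(x2_valid)
y_train, y_valid = np.asarray(y_train), np.asarray(y_valid)

self.classifier.fit([x1_train, x2_train], y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=([x1_valid, x2_valid], y_valid),
                    shuffle=True)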

Related

What's the difference between using these 2 approaches to the LightGBM classifier?

I want to use some LightGBM functions properly.
This is the standard approach; it's no different from any other classifier from sklearn:
define X, y
train_test_split
create classifier
fit on train
predict on test
compare
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
# here maybe DecisionTreeClassifier(), RandomForestClassifier(), etc.
model = lgb.LGBMClassifier()
model.fit(X_train, y_train)
predicted_y = model.predict(X_test)
print(metrics.classification_report(y_test, predicted_y))
But LightGBM has its own functions like lgb.Dataset and Booster.
However, in this Kaggle notebook, LGBMClassifier is not called at all!
Why?
What is the standard order to call LightGBM functions and train models the 'lgbm' way?
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
# why is this Dataset wrapper around X_train, y_train needed?
d_train = lgbm.Dataset(X_train, y_train)
# where is the LightGBM classifier()?
bst = lgbm.train(params, d_train, 50, early_stopping_rounds=100)
preds = bst.predict(X_test)  # note: predict takes the features, not y_test
Why does it train right away?
LightGBM has a few different APIs with different method names (LGBMClassifier, Booster, train, etc.), different parameters, and sometimes different data types; that is why the train method does not need to call LGBMClassifier but does need another type of dataset. There is no right/wrong/standard way: all of them are fine if used well.
https://lightgbm.readthedocs.io/en/latest/Python-API.html#training-api
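A minimal sketch contrasting the two APIs on the same data (parameter values are illustrative, and X, y are assumed to be a binary-classification dataset):

import lightgbm as lgb
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# sklearn-style API: a familiar estimator object
clf = lgb.LGBMClassifier(n_estimators=50)
clf.fit(X_train, y_train)
class_preds = clf.predict(X_test)    # hard class labels

# native training API: wrap the data in lgb.Dataset and call lgb.train
params = {'objective': 'binary'}     # illustrative parameters
d_train = lgb.Dataset(X_train, label=y_train)
bst = lgb.train(params, d_train, num_boost_round=50)
prob_preds = bst.predict(X_test)     # probabilities, not class labels

One practical difference visible here: for a binary objective, Booster.predict returns probabilities, while LGBMClassifier.predict returns class labels.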

NN with Keras predicts classes as dtype=float32 as opposed to true class values of 1, 2, 3, why?

I am implementing a simple NN on the wine data set. The NN works well and produces the prediction score; however, when I try to explore the actual predicted values on the test data set, I receive an array of dtype=float32 values, as opposed to the class values.
The classes are labelled as 1, 2, 3.
I have 13 attributes and 178 observations (a small data set).
Below is the code of the implementation and the outcome I get:
df.head()
X = df.iloc[:, 1:13]  # .ix is deprecated; iloc selects the 12 feature columns
y = np.ravel(df.Type)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
Scale the data:
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
Define the NN:
model = Sequential()
model.add(Dense(13, activation='relu', input_shape=(12,)))
model.add(Dense(4, activation='softmax'))
Fit the model:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train1, epochs=20, batch_size=1, verbose=1)  # y_train1: presumably the one-hot encoded labels
Now this is where I store my predictions into y_pred and get the final score:
y_pred = model.predict(X_test)
score = model.evaluate(X_test, y_test1, verbose=1)
59/59 [==============================] - 0s 2ms/step
[0.1106848283591917, 0.94915255247536356]
When I explore y_pred I see the following:
y_pred[:5]
array([[ 3.86571424e-04, 9.97601926e-01, 1.96467945e-03, 4.67598657e-05],
       [ 2.67244829e-03, 9.87006545e-01, 7.04612210e-03, 3.27492505e-03],
       [ 9.50196641e-04, 1.42343721e-04, 4.57215495e-02, 9.53185916e-01],
       [ 9.03929677e-03, 9.63497698e-01, 2.62350030e-02, 1.22799736e-03],
       [ 1.39460826e-05, 3.24015366e-03, 9.96408522e-01, 3.37353966e-04]], dtype=float32)
I am not sure why I do not see the actual predicted classes as 1, 2, 3.
After trying to convert to int I just get an array of zeros, as all the values are so small.
I really appreciate your help!
You are seeing the predicted probabilities for each class. To convert probabilities to a class, take the argmax of each row.
import numpy as np
y_pred_class = np.argmax(y_pred,axis=1)
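Note that np.argmax returns 0-based column indices. If the labels 1, 2, 3 were one-hot encoded directly with to_categorical (which would explain the 4 output columns above, with column 0 unused), the argmax already equals the original class label. A small sketch of that assumption:

import numpy as np
from keras.utils import to_categorical

y = np.array([1, 2, 3, 1])          # hypothetical labels, as in the question

# to_categorical on raw labels 1..3 infers 4 classes (index 0 unused),
# matching the Dense(4, activation='softmax') output layer above.
y_onehot = to_categorical(y)        # shape (4, 4)

recovered = np.argmax(y_onehot, axis=1)   # array([1, 2, 3, 1])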

How to fit data from InceptionV3 to ImageDataGenerator

How to fit data from InceptionV3 to ImageDataGenerator?
The examples I found for fitting data to ImageDataGenerator are for mnist or cifar10, like this:
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# fit parameters from data
datagen.fit(X_train)
But can I fit data from the InceptionV3 model to my ImageDataGenerator?
I load my InceptionV3 model like this:
base_model = InceptionV3(weights='imagenet', include_top=True)
datagen = ImageDataGenerator(...)
datagen.fit(base_model.get_layer('avg_pool').output)
But I get an error saying: ValueError: setting an array element with a sequence.
I assume that you need to do this in two steps. First, feed your data into the InceptionV3 model and save the output into a numpy array. Then feed this numpy array into your second model.
The first step looks like this (taken from here):
generator = datagen.flow_from_directory(
    'data/train',
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode=None,  # the generator will only yield batches of data, no labels
    shuffle=False)    # the data will be in order
bottleneck_features_train = model.predict_generator(generator, 2000)
# open the file in binary mode ('wb'); text mode would fail for numpy data
np.save(open('bottleneck_features_train.npy', 'wb'), bottleneck_features_train)
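The second step would then be to train a small model on top of the saved features. A minimal sketch, assuming the .npy file from above and placeholder labels (replace them with your real ones):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Flatten

# Load the bottleneck features produced by the first step.
train_data = np.load(open('bottleneck_features_train.npy', 'rb'))

# Placeholder labels, one per sample; replace with your real labels.
train_labels = np.zeros(len(train_data))

# A small classifier trained on top of the frozen InceptionV3 features.
top_model = Sequential()
top_model.add(Flatten(input_shape=train_data.shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dense(1, activation='sigmoid'))  # binary case, illustrative

top_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
top_model.fit(train_data, train_labels, epochs=10, batch_size=32)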

keras neural network architecture incorrect

Here is a simple neural network that contains 3 input values and 3 output values.
The error:
ValueError: Error when checking model target: expected dense_78 to have shape (None, 3) but got array with shape (3, 1)
is thrown when I execute this network. I've set the final layer to have 3 possible outputs, which matches the number of labels:
model.add(Dense(3, activation='softmax'))
I have not architected this network correctly; where is my mistake?
data = [[0.29365378],
        [0.27958957],
        [0.27946938]]
labels = [[1], [2], [3]]
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
model = Sequential()  # missing in the original snippet; required before model.add
model.add(Dense(64, activation='relu', input_dim=1))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
model.fit(data, labels,
          epochs=20,
          batch_size=32)
A Dense(3...) will give you three outputs per sample.
The output of a Dense(3...) has shape (BatchSize,3), or (None,3) as Keras says it.
If you want one among 3 possible classes for each sample, then you must have labels with shape (BatchSize, 3); in your case the batch size also seems to be 3.
You must format your labels in one-hot vectors:
class 1 = [1,0,0]
class 2 = [0,1,0]
class 3 = [0,0,1]
The to_categorical function in keras.utils can help you transform numerical classes into one-hot vectors.
If you have three samples, you must have labels as:
labels = [[1,0,0],[0,1,0],[0,0,1]]
Three samples, each with three possible classes: the first sample being class 1, the second class 2, and the third class 3.
This has shape (3,3) which will match the (None,3) demanded by Dense(3...).
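A small sketch of that transformation (the shift to 0-based classes is my addition, since to_categorical counts classes from 0):

import numpy as np
from keras.utils import to_categorical

labels = np.array([1, 2, 3])

# Shift to 0-based classes (0, 1, 2) so the one-hot matrix has exactly
# 3 columns, matching Dense(3, activation='softmax').
one_hot = to_categorical(labels - 1, num_classes=3)
# array([[1., 0., 0.],
#        [0., 1., 0.],
#        [0., 0., 1.]])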

How to check the weights after every epoch in a Keras model

I am using the Sequential model in Keras. I would like to check the weights of the model after every epoch. Could you please guide me on how to do so?
model = Sequential()
model.add(Embedding(max_features, 128, dropout=0.2))
model.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=5, validation_data=(X_test, y_test))
Thanks in advance.
What you are looking for is a callback. A callback is a Keras object whose methods are called repeatedly at key points during training: after a batch, after an epoch, or at the end of the whole training. See here for the docs and the list of existing callbacks.
What you want is a custom callback, which can be created with a LambdaCallback object.
from keras.callbacks import LambdaCallback

model = Sequential()
model.add(Embedding(max_features, 128, dropout=0.2))
model.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(1))
model.add(Activation('sigmoid'))

# on_epoch_end receives (epoch, logs); here we ignore both and just print
print_weights = LambdaCallback(on_epoch_end=lambda epoch, logs: print(model.layers[0].get_weights()))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train,
          y_train,
          batch_size=batch_size,
          nb_epoch=5,
          validation_data=(X_test, y_test),
          callbacks=[print_weights])
The code above should print your embedding weights (model.layers[0].get_weights()) at the end of every epoch. It is up to you to print them somewhere readable, dump them into a pickle file, etc.
Hope this helps
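For more control, e.g. saving the weights of every layer to disk each epoch, a full Callback subclass works too. A minimal sketch, assuming the same model as above (the file name is illustrative):

import pickle
from keras.callbacks import Callback

class WeightLogger(Callback):
    # Dump all layer weights to a pickle file at the end of each epoch.
    def on_epoch_end(self, epoch, logs=None):
        weights = [layer.get_weights() for layer in self.model.layers]
        with open('weights_epoch_%d.pkl' % epoch, 'wb') as f:
            pickle.dump(weights, f)

model.fit(X_train, y_train,
          batch_size=batch_size,
          nb_epoch=5,
          validation_data=(X_test, y_test),
          callbacks=[WeightLogger()])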