I'm getting a 0% accuracy for this keras neural network

I am trying to perform binary classification on a subset of the MNIST dataset: the goal is to predict whether a sample is a 6 or an 8. Each sample has 784 pixel features, and there are 8201 samples in the dataset. I built a network with one input layer, two hidden layers, and one output layer, using sigmoid as the activation function for the output layer and relu for the hidden layers. I have no idea why I am getting 0% accuracy at the end.
#import libraries
from keras.models import Sequential
from keras.layers import Dense
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import os
np.random.seed(7)
os.chdir('C:/Users/olivi/Documents/Python workspace')
#data loading
data = pd.read_csv('MNIST_CV.csv')
#Y target label
Y = data.iloc[:,0]
#X: features
X = data.iloc[:,1:]
X_train, X_test, y_train, y_test = train_test_split(X, Y,test_size=0.25,random_state=42)
# create model
model = Sequential()
model.add(Dense(392, kernel_initializer='normal', input_dim=784,
                activation='relu'))
model.add(Dense(196,kernel_initializer='normal', activation='relu'))
model.add(Dense(98,kernel_initializer='normal', activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss = 'binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# Training the model
model.fit(X_train, y_train, epochs=100, batch_size=50)
print(model.predict(X_test,batch_size= 50))
score = model.evaluate(X_test, y_test)
print("\n Testing Accuracy:", score[1])

If you use binary cross-entropy, your labels should be either 0 or 1 (representing "is not a 6" or "is a 6", respectively).
If your Y target labels are currently the values 6 and 8, training will fail.
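For instance, a minimal remapping before the train/test split (a sketch, assuming the first CSV column holds the digit labels 6 and 8):
# Map the digit labels {8, 6} to {0, 1} so binary_crossentropy applies
Y = (data.iloc[:, 0] == 6).astype(int)  # 1 = "is a 6", 0 = "is an 8"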

When you choose a subset of MNIST, you have to be sure how many different digit classes there are in your sample (in both the training and test sets).
So:
classes=len(np.unique(Y))
Then you should one-hot encode Y. Note that np_utils needs an import, and to_categorical expects labels in the range 0..classes-1, so map the 6/8 labels to 0/1 first:
from keras.utils import np_utils
Y_train = np_utils.to_categorical(y_train.replace({6: 0, 8: 1}), classes)
Y_test = np_utils.to_categorical(y_test.replace({6: 0, 8: 1}), classes)
After that, change the last layer of your neural net to:
model.add(Dense(classes, activation='sigmoid'))
Finally:
model.predict_classes(X_test,batch_size= 50)
Be sure both training and test set have the same number of classes for Y.
After the prediction, find where the 6s and 8s are located using np.where(), select that subsample, and test your accuracy.
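For example (a sketch, assuming the same 6-to-0 / 8-to-1 mapping as above):
# Compare the predicted classes with the true (remapped) labels
pred = model.predict_classes(X_test, batch_size=50).reshape(-1)
true = y_test.replace({6: 0, 8: 1}).values
idx = np.where((y_test == 6) | (y_test == 8))[0]  # locate the 6s and 8s
print("subsample accuracy:", np.mean(pred[idx] == true[idx]))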

Related

How to use a Gaussian Process Neural Network to make predictions

My training data shapes are as follows:
x_train (5000, 300)
y_train (5000, 500)
So I am using 300 data points to predict 500 data points, and there are 5000 training examples.
Using an ANN model to make predictions is straightforward:
model = Sequential()
model.add(Dense(50, input_dim = x_train.shape[1], activation = 'relu'))
model.add(Dense(50, activation = 'relu'))
model.add(Dense(y_train.shape[1]))  # 500 outputs
model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, validation_data=(x_vali, y_vali), epochs =30, batch_size = 64,verbose=1, callbacks=[early_stop])
However, I am not sure how to change this to a Gaussian process neural network or a Bayesian neural network.
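One practical starting point (a sketch, not a full Gaussian process model: it uses Monte Carlo dropout, a common approximation to Bayesian inference in neural networks, with layer sizes carried over from the ANN above):
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Dropout stays active at inference when the model is called with training=True,
# so repeated forward passes give a distribution over outputs.
mc_model = keras.Sequential([
    layers.Dense(50, activation='relu', input_shape=(300,)),
    layers.Dropout(0.1),
    layers.Dense(50, activation='relu'),
    layers.Dropout(0.1),
    layers.Dense(500),
])
mc_model.compile(optimizer='adam', loss='mse')

# After fitting, draw several stochastic forward passes per input:
def mc_predict(model, x, n_samples=50):
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)  # predictive mean and spread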

Predictions into the future Azure Machine Learning Studio Designer

I am currently developing an automated mechanism using the Azure Machine Learning Designer (AMLD). During development I used an 80/20 split to test the efficiency of my predictions.
Now I want to go live, but I've missed the point where I can actually predict into the future.
I currently get a prediction for the last 20% of my data so I can compare it to the actual data. How do I change it so that the prediction actually starts at the end of my data?
A part of my prediction process is attached (image not included here).
Continuing from the comments:
Example problem statement: predicting salary based on experience.
The dataset consists of three columns; salary is the dependent variable, and the position level is used as the independent variable.
The sample code starts below.
#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#importing the dataset
dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:-1].values
y = dataset.iloc[:, -1].values
#Training the linear regression model on complete dataset
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
#Training with the model.
from sklearn.preprocessing import PolynomialFeatures
poly_reg = PolynomialFeatures(degree = 4)
X_poly = poly_reg.fit_transform(X)
lin_reg_2 = LinearRegression()
lin_reg_2.fit(X_poly, y)
As a sample, I am using polynomial regression:
#visualize Linear regression results.
plt.scatter(X, y, color = 'red')
plt.plot(X, lin_reg.predict(X), color = 'blue')
plt.title('Truth or Bluff (Linear Regression)')
plt.xlabel('Position Level')
plt.ylabel('Salary')
plt.show()
#Visualize Polynomial regression results
plt.scatter(X, y, color = 'red')
plt.plot(X, lin_reg_2.predict(poly_reg.fit_transform(X)), color = 'blue')
plt.title('Truth or Bluff (Polynomial Regression)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
#Predicting new result with linear regression-> this can help you
lin_reg.predict([[6.5]])
#Predicting new result with polynomial regression
lin_reg_2.predict(poly_reg.fit_transform([[6.5]]))
Go through the flow of implementing the two regression models on the same dataset and note how the results differ.
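To actually predict "into the future", pass inputs beyond the range of the training data (a sketch; the position levels 11 and 12 are assumed values past the last observed level):
# Predict for position levels beyond the training data
future_levels = np.array([[11], [12]])
print(lin_reg_2.predict(poly_reg.transform(future_levels)))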

Approximate the log-function using keras

I'm learning neural networks and I want to write a neural network to approximate the log function. The x domain is 1 to 100.
I use keras as my tool, but the result is not good. Should I modify the loss function, or add more hidden layers? Finally, how should I train my model?
I'm sorry for my poor English.
The code is as follows:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
x_train = np.linspace(1, 100, num=100)
y_train = np.log2(x_train)
model = Sequential()
model.add(Dense(units=500, input_dim=1, kernel_initializer='normal',
                activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=500, kernel_initializer='normal',
                activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=1, kernel_initializer='normal'))
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
train_history = model.fit(x=x_train, y=y_train,
                          validation_split=0.2, epochs=100, batch_size=30,
                          verbose=2)
How should I modify my code? Please guide me.
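A minimal sketch of changes that often help here (assumptions on my part: the 0.5 dropout and the unscaled 1-100 input are the main problems, since heavy dropout adds a lot of noise to a small regression):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.linspace(1, 100, num=100)
y_train = np.log2(x_train)
x_scaled = x_train / 100.0  # scale inputs to [0, 1]

model = Sequential()
model.add(Dense(64, input_dim=1, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(1))  # linear output for regression; no dropout
model.compile(loss='mse', optimizer='adam')
model.fit(x_scaled, y_train, epochs=2000, batch_size=16, verbose=0)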

Wrongly classified cases in a binary classification challenge (in keras)

The neural network defined below is used to classify the dataset shown in the image (not included here).
The simulation statistics suggest that the classification accuracy is 50%, so my question is: how do I know which cases of the dataset were not classified correctly?
from keras.models import Sequential
from keras.layers import Dense
from sklearn.cross_validation import train_test_split
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
dataset = numpy.loadtxt("sorted output.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:3]
Y = dataset[:,3]
# split into 67% for train and 33% for test
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed)
# create model
model = Sequential()
model.add(Dense(12, input_dim=3, init='uniform', activation='relu'))
model.add(Dense(3, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test,y_test), nb_epoch=150, batch_size=10)
Compare model.predict(X_train) with y_train. To do that, you can add the rows
import numpy as np
train_prediction = np.round(model.predict(X_train)).reshape(-1)
train_prediction = train_prediction.astype(int)
to the end of your code. Then you can look at the nonzero entries of train_prediction - y_train. The positions of these entries are the places where the model misclassified.
The reason for the np.round is that your last activation function is a sigmoid, so values closer to 0 are classified as 0 and those closer to 1 are classified as 1.
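For example, to list the misclassified training samples (a sketch following the rows above):
# Indices where the rounded prediction disagrees with the label
misclassified = np.nonzero(train_prediction - y_train.astype(int))[0]
print(misclassified)           # positions of the wrongly classified cases
print(X_train[misclassified])  # the corresponding input rows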

Neural Network Worse than Expected on Simple Input

I have inputs that are binary (0, 1) and outputs that are binary (0, 1). More than 80% of the time, the binary input is equal to the binary output. However, when I train a keras neural network, I get an accuracy that goes to 0.6. There are 1000 such inputs. Here is the network setup in Keras:
model = Sequential()
model.add(Dense(12, input_dim=1, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
model.compile(loss='mse', optimizer='adam')
This seems very strange. What could the problem be?
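A minimal sketch of a setup that usually behaves better on this kind of task (assumptions on my part: the mse loss and the missing accuracy metric are part of the problem; binary cross-entropy matches a sigmoid output):
model = Sequential()
model.add(Dense(12, input_dim=1, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])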