I am learning TensorFlow 2.x. I am following this page to understand hinge loss.
I went through the standalone usage code.
The code is below:
y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
h = tf.keras.losses.Hinge()
h(y_true, y_pred).numpy()
The output is 1.3.
I tried to calculate it manually, writing code based on the given formula:
loss = maximum(1 - y_true * y_pred, 0)
My code:
import tensorflow as tf
y_true = tf.Variable([[0., 1.], [0., 0.]])
y_pred = tf.Variable([[0.6, 0.4], [0.4, 0.6]])
def hinge_loss(y_true, y_pred):
    return tf.reduce_mean(tf.math.maximum(1. - y_true * y_pred, 0.))
print("Hinge Loss :: ", hinge_loss(y_true, y_pred).numpy())
But I am getting 0.9.
Where am I going wrong? Am I missing a concept here?
Kindly guide.
You have to change the 0 values in y_true to -1. The link you shared mentions that if your y_true is originally {0, 1}, you have to change it to {-1, 1} for the hinge loss calculation. Then you will get the same value as in the example, which is 1.3.
From the link shared: https://www.tensorflow.org/api_docs/python/tf/keras/losses/Hinge
y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1.
import tensorflow as tf
y_true = tf.Variable([[0., 1.], [0., 0.]])
y_pred = tf.Variable([[0.6, 0.4], [0.4, 0.6]])
def hinge_loss(y_true, y_pred):
    return tf.reduce_mean(tf.math.maximum(1. - y_true * y_pred, 0.))
# convert y_true from {0,1} to {-1,1} before passing them to hinge_loss
y_true = y_true * 2 - 1
print(hinge_loss(y_true, y_pred))
Output:
tf.Tensor(1.3, shape=(), dtype=float32)
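For a quick manual check (plain NumPy, outside TensorFlow) that reproduces the 1.3 after converting the labels:
import numpy as np
y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
y_true_signed = y_true * 2 - 1  # {0, 1} -> {-1, 1}
# per-element terms: max(1 + 0.6, 0) = 1.6, max(1 - 0.4, 0) = 0.6,
#                    max(1 + 0.4, 0) = 1.4, max(1 + 0.6, 0) = 1.6
print(np.mean(np.maximum(1. - y_true_signed * y_pred, 0.)))  # (1.6 + 0.6 + 1.4 + 1.6) / 4 = 1.3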
I am trying to train a CNN on MNIST but test it on my own handwritten digits. To do that I wrote the following code, but I am getting the error in the title of this question:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torchvision import datasets, transforms
from torchvision.datasets import ImageFolder

batch_size = 64
train_dataset = datasets.MNIST(root='./data/',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = ImageFolder('my_digit_images/', transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        #print(self.conv1.weight.shape)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv3 = nn.Conv2d(20, 20, kernel_size=3)
        #print(self.conv2.weight.shape)
        self.mp = nn.MaxPool2d(2)
        self.fc = nn.Linear(320, 10)

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.conv1(x))
        #print(x.shape)
        x = F.relu(self.mp(self.conv2(x)))
        x = F.relu(self.mp(self.conv3(x)))
        #print("2.", x.shape)
        # x = F.relu(self.mp(self.conv3(x)))
        x = x.view(in_size, -1)  # flatten the tensor
        #print("3.", x.shape)
        x = self.fc(x)
        return F.log_softmax(x, dim=1)
model = Net()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
def train(epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % 10 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
def test():
    model.eval()
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data, target = Variable(data, volatile=True), Variable(target)
        output = model(data)
        test_loss += F.nll_loss(output, target, size_average=False).data
        pred = output.data.max(1, keepdim=True)[1]
        correct += pred.eq(target.data.view_as(pred)).cpu().sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
The MNIST dataset contains black-and-white, 1-channel images, while yours are probably 3-channel RGB. Either re-encode your images or preprocess them like
img = img[:, 0:1, :, :]
You can do this with a custom transform, added after transforms.ToTensor(), as sketched below.
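A minimal sketch of such a transform (note that inside a torchvision transform pipeline each tensor is a single CHW image, so the slice keeps only the first channel):
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda img: img[0:1, :, :]),  # keep only the first of the 3 RGB channels
])
test_dataset = ImageFolder('my_digit_images/', transform=test_transform)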
The images used for training and testing should follow the same distribution. Since the MNIST data is grayscale by default, and assuming you did not change the number of channels, the model expects the same number of channels at test time.
The following code is an example of how to do this using a transformation.
Following the order defined below, it:
Converts the image to a single channel (grayscale)
Resizes the image to the size of the default MNIST data (28x28)
Converts the image to a tensor
Normalizes the tensor with the same mean and std as used during training (assuming you used the same values).
test_dataset = ImageFolder('my_digit_images/', transform=transforms.Compose([transforms.Grayscale(num_output_channels=1),
transforms.Resize((28, 28)),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))]))
Assuming the following forward pass in a classic ANN
(based on https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/):
net_h1 = w1*i1 + w2*i2 + b1*1 = 0.15*0.05 + 0.20*0.10 + 0.35*1 = 0.3775
Now let's use a sigmoid activation on that; I get:
out_h1 = 1 / (1 + exp(-net_h1)) = 1 / (1 + exp(-0.3775)) = 0.593269992
So far so good; now let's check the result of this calculation in Python:
1 / (1+ math.exp(-0.3775)) # ... = 0.5932699921071872, OK
However, this is double precision, and since Keras uses float32, let's calculate the same thing with float32. I get:
import numpy as np
w1 = np.array(0.15, dtype=np.float32)
i1 = np.array(0.05, dtype=np.float32)
w2 = np.array(0.2, dtype=np.float32)
i2 = np.array(0.1, dtype=np.float32)
b1 = np.array(0.35, dtype=np.float32)
k1 = np.array(1, dtype=np.float32)
k2= np.array(1, dtype=np.float32)
n1 = w1 * i1 + w2 * i2 + b1 * k1
np.array(k2 / (1 + np.exp(-n1)), dtype=np.float32) # --->array(0.59327, dtype=float32)
OK, so normally I should be able to get 0.59327 for out_h1 using Keras's float32 arithmetic.
Let's now try to get this result using Keras:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
import numpy as np
model = Sequential()
model.add(Dense(2,activation='sigmoid')) # Build only one layer (enough for example)
model.compile(optimizer = SGD(lr = 0.5),loss='mse') # Don't really care for this example, simply for compilation
input = np.array([[0.05, 0.10]]) # Set input, same as the ones provided in the example
output = np.array([[0.01, 0.99]]) # Don't really care for this example since we want to check activation only
weights = [np.array([[0.15, 0.20 ], # Required for set_weights
[0.25, 0.30]], dtype=np.float32),
np.array([0.35, 0.35], dtype=np.float32)]
model.build(input.shape) # Required for set_weights
model.set_weights(weights) # Set the weights to be the same as the one provided in the example
model.predict(np.array([[0.05, 0.10]], dtype=np.float32)) # This can be seen as out_{h1}
# array([[0.5944759 , 0.59628266]], dtype=float32)
# NOK: 0.5944759 != 0.59327
Can someone explain to me why I get 0.5944759 instead of 0.59327? The result seems far from the expected output. If possible, please provide an example calculation and/or a way to get the expected output of 0.59327.
Please note this example was done using:
tensorflow 2.3.1
numpy 1.18.5
python 3.8.12
Thanks for your help.
TensorFlow performs the Dense layer computation row by column:
it multiplies each row of your input by a column of the kernel (weights) matrix, i.e. output = input @ kernel + bias. You need to transpose your weights matrix; if you do, you will get the correct result.
I validated this using your code, changing only:
weights = [np.array([[0.15, 0.20],   # Required for set_weights
                     [0.25, 0.30]], dtype=np.float32),
           np.array([0.35, 0.35], dtype=np.float32)]
to (note the transpose operator .T):
weights = [np.array([[0.15, 0.20],   # Required for set_weights
                     [0.25, 0.30]], dtype=np.float32).T,
           np.array([0.35, 0.35], dtype=np.float32)]
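As a quick sanity check outside Keras, here is a sketch of the row-by-column product that Dense computes (output = sigmoid(input @ kernel + bias)); it reproduces the hand calculation:
import numpy as np
x = np.array([[0.05, 0.10]], dtype=np.float32)
kernel = np.array([[0.15, 0.20],
                   [0.25, 0.30]], dtype=np.float32).T
bias = np.array([0.35, 0.35], dtype=np.float32)
out = 1. / (1. + np.exp(-(x @ kernel + bias)))
print(out)  # approximately [[0.59327 0.59688]], matching out_h1 and out_h2 from the worked example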
I defined the following simple neural network:
import torch
import torch.nn as nn
X = torch.tensor(([1, 2]), dtype=torch.float)
y = torch.tensor([1.])
learning_rate = 0.001
class Neural_Network(nn.Module):
    def __init__(self):
        super(Neural_Network, self).__init__()
        self.W1 = torch.nn.Parameter(torch.tensor(([1, 0], [2, 3]), dtype=torch.float, requires_grad=True))
        self.W2 = torch.nn.Parameter(torch.tensor(([2], [1]), dtype=torch.float, requires_grad=True))

    def forward(self, X):
        self.xW1 = torch.matmul(X, self.W1)
        self.h = torch.tensor([torch.tanh(self.xW1[0]), torch.tanh(self.xW1[1])])
        return torch.sigmoid(torch.matmul(self.h, self.W2))
net = Neural_Network()
for z in range(60):
    optim = torch.optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)
    loss = (y - net(X)) ** 2
    loss.backward()
    optim.step()
I can run it, and print(net.W1); print(net.W2) prints:
Parameter containing:
tensor([[1., 0.],
[2., 3.]], requires_grad=True)
Parameter containing:
tensor([[2.0078],
[1.0078]], requires_grad=True)
So my problem is that it seems like W1 is not being updated.
When I call print(net.W1.grad) I get None for every iteration, which confuses me a lot.
I tried to define the function in one line like so:
loss = (y - torch.sigmoid(math.tanh(x[0] * W_1[0][0] + x[1] * W_1[1][0]) * W_2[0] + math.tanh(x[0] * W_1[0][1] + x[1] * W_1[1][1]) * W_2[1])) ** 2, but it did not help.
Of course I could hardcode the derivative and everything, but that seems painful, and I thought .backward() could be used in this case.
How can I optimize W1 using backward()?
I suspect that the following line:
self.h = torch.tensor([torch.tanh(self.xW1[0]), torch.tanh(self.xW1[1])])
is the culprit.
The new tensor self.h is created with torch.tensor(), which detaches it from the computation graph: it does not inherit the requires_grad attribute from self.xW1 and, by default, requires_grad is set to False, so no gradient can flow back to W1.
You can call self.h = torch.tanh(self.xW1) instead, and the operation will then be applied element-wise to all the elements of self.xW1 while staying in the graph.
In addition, I suggest you inspect your gradients by using PyTorch hooks, as in the sketch below.
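A minimal, self-contained sketch of that fix (the hook printout is only for illustration):
import torch
import torch.nn as nn

class Neural_Network(nn.Module):
    def __init__(self):
        super(Neural_Network, self).__init__()
        self.W1 = torch.nn.Parameter(torch.tensor([[1., 0.], [2., 3.]]))
        self.W2 = torch.nn.Parameter(torch.tensor([[2.], [1.]]))

    def forward(self, X):
        xW1 = torch.matmul(X, self.W1)
        h = torch.tanh(xW1)  # element-wise, keeps the autograd graph intact
        return torch.sigmoid(torch.matmul(h, self.W2))

X = torch.tensor([1., 2.])
y = torch.tensor([1.])

net = Neural_Network()
net.W1.register_hook(lambda grad: print("W1 grad:\n", grad))  # hook fires whenever W1's gradient is computed

loss = torch.mean((y - net(X)) ** 2)
loss.backward()
print(net.W1.grad)  # no longer None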
Following issue #1435, I have an additional question about how to use GPflow.
I reproduce the issue in an additional notebook: https://github.com/avalonhse/BayesNotebook/blob/master/Issue_2_GPFlow_Linear_Classification.ipynb
My goal is to fit an additive kernel to 2-dimensional data (squared exponential in dimension 1 and a linear kernel in dimension 2). Following the instructions in #1435, I successfully fit the model with the kernel gpflow.kernels.Linear(variance=0.1).
(figure: fit with the Linear kernel)
However, when I use the kernel gpflow.kernels.Linear(active_dims=[1], variance=0.01) as I originally planned, the model is not fitted. I used GPy with the same kernel as a reference, and its result looks reasonable.
(figure: fit with the 1-dim GPflow kernel)
import numpy as np
X = np.array([[ 9.96578428, 60.],[ 9.96578428, 40.],[ 9.96578428, 20.],
[10.96578428, 30.],[11.96578428, 40.],[12.96578428, 50.],
[12.96578428, 70.],[8.96578428, 30. ],[ 7.96578428, 40.],
[ 6.96578428, 50.],[ 6.96578428, 30.],[ 6.96578428, 10.],
[11.4655664 , 71.],[ 8.56605404, 63.],[12.41574177, 69.],
[10.61562964, 48.],[ 7.61470984, 51.],[ 9.31514956, 45.]])
Y = np.array([[1., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 1., 1., 0., 0., 1., 0.]]).T
# plotting
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
def plot(X, Y):
    mask = Y[:, 0] == 1
    plt.figure(figsize=(6, 6))
    plt.plot(X[mask, 0], X[mask, 1], "oC0", mew=0, alpha=0.5)
    plt.ylim(-10, 100)
    plt.xlim(5, 15)
    _ = plt.plot(X[np.logical_not(mask), 0], X[np.logical_not(mask), 1], "oC1", mew=0, alpha=0.5)
plot(X,Y)
# Evaluate real function and the predicted probability
res = 500
xx, yy = np.meshgrid(np.linspace(5, 15, res),
np.linspace(- 10, 120, res))
Xplot = np.vstack((xx.flatten(), yy.flatten())).T
# Code followed the Notebook : https://gpflow.readthedocs.io/en/develop/notebooks/basics/classification.html
import tensorflow as tf
import tensorflow_probability as tfp
import gpflow
from gpflow.utilities import print_summary, set_trainable, to_default_float
gpflow.config.set_default_summary_fmt("notebook")
def testGPFlow(k):
    m = gpflow.models.VGP(
        (X, Y),
        kernel=k,
        likelihood=gpflow.likelihoods.Bernoulli()
    )
    print("\n ########### Model before optimization ########### \n")
    print_summary(m)
    print("\n ########### Model after optimization ########### \n")
    opt = gpflow.optimizers.Scipy()
    res = opt.minimize(
        m.training_loss, variables=m.trainable_variables, options=dict(maxiter=2500), method="L-BFGS-B"
    )
    print(' Message: ' + str(res.message) + '\n Status = ' + str(res.status) + '\n Number of iterations = ' + str(res.nit))
    print_summary(m)

    means, _ = m.predict_y(Xplot)  # here we only care about the mean
    y_prob = means.numpy().reshape(*xx.shape)
    print("Fitting model using GPflow")
    plot(X, Y)
    _ = plt.contour(
        xx,
        yy,
        y_prob,
        [0.5],  # plot the p=0.5 contour line only
        colors="k",
        linewidths=1.8,
        zorder=100,
    )
k = gpflow.kernels.Linear(active_dims=[1],variance= 0.01)
testGPFlow(k)
k = gpflow.kernels.Linear(variance= 1)
testGPFlow(k)
The GPy code is for reference only, to suggest how a fitted model should look. I am aware that GPy and GPflow use different methods. My question is why the GPflow model does not fit when I specify the Linear kernel on one dimension.
Thanks for posting this question, Hoang, and for using GPflow.
When you specify input_dim in GPy, you are telling the algorithm to act on two dimensions. active_dims in GPflow behaves differently: it specifies which dimensions you want the kernel to act on. active_dims=[1] tells GPflow to apply your linear kernel to only the second (y) dimension.
Since you want your kernel to act on both the x and y dimensions, you should specify active_dims=[0, 1] rather than just active_dims=[1]. When I run your code with this fix, I get a result identical to GPy's result:
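That is, only the kernel construction changes; the rest of your code stays the same:
k = gpflow.kernels.Linear(active_dims=[0, 1], variance=0.01)
testGPFlow(k)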
I'm trying to reproduce the results of this tutorial (see LASSO regression) in PyMC3. As commented in this reddit thread, the mixing for the first two coefficients wasn't good because the variables are correlated.
I tried implementing it in PyMC3, but it didn't work as expected with Hamiltonian samplers. I could only get it working with the Metropolis sampler, which achieves the same result as PyMC2.
I don't know if it's related to the fact that the Laplace prior is peaked (it has a discontinuous derivative at 0), but it worked perfectly well with Gaussian priors. I tried with and without MAP initialization, and the result is always the same.
Here is my code:
from pymc import *
from numpy import sqrt
from scipy.stats import norm
import pylab as plt
# Same model as the tutorial
n = 1000
x1 = norm.rvs(0, 1, size=n)
x2 = -x1 + norm.rvs(0, 10**-3, size=n)
x3 = norm.rvs(0, 1, size=n)
y = 10 * x1 + 10 * x2 + 0.1 * x3
with Model() as model:
    # Laplacian prior only works with Metropolis sampler
    coef1 = Laplace('x1', 0, b=1/sqrt(2))
    coef2 = Laplace('x2', 0, b=1/sqrt(2))
    coef3 = Laplace('x3', 0, b=1/sqrt(2))

    # Gaussian prior works with NUTS sampler
    #coef1 = Normal('x1', mu=0, sd=1)
    #coef2 = Normal('x2', mu=0, sd=1)
    #coef3 = Normal('x3', mu=0, sd=1)

    likelihood = Normal('y', mu=coef1 * x1 + coef2 * x2 + coef3 * x3, tau=1, observed=y)

    #step = Metropolis()  # Works just like PyMC2
    start = find_MAP()  # Doesn't help
    step = NUTS(state=start)  # Doesn't work
    trace = sample(10000, step, start=start, progressbar=True)
plt.figure(figsize=(7, 7))
traceplot(trace)
plt.tight_layout()
autocorrplot(trace)
summary(trace)
Here is the error I get:
PositiveDefiniteError: Simple check failed. Diagonal contains negatives
Am I doing something wrong or is the NUTS sampler not supposed to work on cases like this?
Whyking from the reddit thread suggested using the MAP estimate as the scaling for NUTS instead of passing it as the state, and it worked wonders.
Here is a notebook with the results and the updated code.
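A minimal sketch of that change, assuming the PyMC3 NUTS API of the time (only the sampler setup differs from the code in the question):
with model:
    start = find_MAP()
    step = NUTS(scaling=start)  # pass the MAP point as the scaling, not as state
    trace = sample(10000, step, start=start, progressbar=True)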