For my job, I have to implement some simple machine learning.
I don't have much of a background in math (so it's hard to understand what I'm doing).
So my first attempt at doing something with TF is to compute a multivariate linear regression.
I take this little data pool:
Conso,DJUclim,DJUchau
171408,0,282.8
151620,0.9,171.6
164475,2.7,137.8
153866,10,99.5
162933,65.6,32.4
188475,183,0.8
210994,231.5,0.2
222873,256.3,0
179239,109.9,9.2
159162,45.9,32.5
158104,4.7,142.6
174184,0.6,227.9
and try to find the best values for X1, X2, and B in Conso = X1*DJUchau + X2*DJUclim + B.
With Excel I found:
X1 = 118.734745
X2 = 306.035978
B = 140288.882921
and
r_square = 94.8375%
rmse = 5660.507380
Then I try to do the same thing with TensorFlow...
After 14k+ iterations I find:
X1 = 118.689559
X2 = 305.991638
B = 140296.921875
and
r_square = 94.8367%
rmse = 4902.14502
Why don't I get the same values?
Which result is the more correct one (Excel or my ML)?
Why does Excel do this instantly, while TensorFlow needs a lot of training?
Is TF overkill for simple regression, and does it hurt performance?
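For comparison, the same fit can be computed in closed form; below is a minimal numpy sketch (the small table above typed in directly, variable names made up). This kind of direct least-squares solve is essentially what a spreadsheet regression does, which is why it is instantaneous, while gradient descent approaches the same solution iteratively.
import numpy as np
# Closed-form ordinary least squares on the data above (a sketch, not the asker's code).
conso   = np.array([171408, 151620, 164475, 153866, 162933, 188475,
                    210994, 222873, 179239, 159162, 158104, 174184], dtype=float)
djuclim = np.array([0, 0.9, 2.7, 10, 65.6, 183, 231.5, 256.3, 109.9, 45.9, 4.7, 0.6])
djuchau = np.array([282.8, 171.6, 137.8, 99.5, 32.4, 0.8, 0.2, 0, 9.2, 32.5, 142.6, 227.9])
# Design matrix [DJUchau, DJUclim, 1], so the solution vector is [X1, X2, B].
A = np.column_stack([djuchau, djuclim, np.ones_like(conso)])
coeffs, *_ = np.linalg.lstsq(A, conso, rcond=None)
print(coeffs)   # should land close to the Excel values (~118.7, ~306.0, ~140289)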
I'm trying to convolve two 1D tensors in Keras.
I get two inputs from other models:
x - of length 100
ker - of length 5
I would like to get the 1D convolution of x using the kernel ker.
I wrote a Lambda layer to do it:
import tensorflow as tf
from keras import Input, Model
from keras.layers import Lambda

def convolve1d(x):
    y = tf.nn.conv1d(value=x[0], filters=x[1], padding='VALID', stride=1)
    return y

x = Input(shape=(100,))
ker = Input(shape=(5,))
y = Lambda(convolve1d)([x, ker])
model = Model([x, ker], [y])
I get the following error:
ValueError: Shape must be rank 4 but is rank 3 for 'lambda_67/conv1d/Conv2D' (op: 'Conv2D') with input shapes: [?,1,100], [1,?,5].
Can anyone help me understand how to fix it?
It was much harder than I expected, because Keras and TensorFlow don't expect any batch dimension in the convolution kernel, so I had to write the loop over the batch dimension myself, which requires specifying batch_shape instead of just shape in the Input layer. Here it is:
import numpy as np
import tensorflow as tf
import keras
from keras import backend as K
from keras import Input, Model
from keras.layers import Lambda

batch_size = 1  # the loop below needs the (fixed) batch size at graph-construction time

def convolve1d(x):
    input, kernel = x
    output_list = []
    if K.image_data_format() == 'channels_last':
        kernel = K.expand_dims(kernel, axis=-2)
    else:
        kernel = K.expand_dims(kernel, axis=0)
    for i in range(batch_size):  # loop over the batch dimension
        output_temp = tf.nn.conv1d(value=input[i:i+1, :, :],
                                   filters=kernel[i, :, :],
                                   padding='VALID',
                                   stride=1)
        output_list.append(output_temp)
        print(K.int_shape(output_temp))
    return K.concatenate(output_list, axis=0)

batch_input_shape = (batch_size, 100, 1)
batch_kernel_shape = (batch_size, 5, 1)

x = Input(batch_shape=batch_input_shape)
ker = Input(batch_shape=batch_kernel_shape)
y = Lambda(convolve1d)([x, ker])
model = Model([x, ker], [y])

a = np.ones(batch_input_shape)
b = np.ones(batch_kernel_shape)
c = model.predict([a, b])
In its current state:
- It doesn't work for inputs (x) with multiple channels.
- If you provide several filters, you get as many outputs, each being the convolution of the input with the corresponding kernel.
From the given code it is difficult to tell what you mean by "is it possible". But if what you mean is to merge two layers and feed the merged layer to a convolution, then yes, it is possible:
import keras
from keras import Input, Model

x = Input(shape=(100,))
ker = Input(shape=(5,))
merged = keras.layers.concatenate([x, ker], axis=-1)
merged = keras.layers.Reshape((105, 1))(merged)        # Conv1D expects (steps, channels)
y = keras.layers.Conv1D(1, 5, padding='same')(merged)  # a Conv1D layer with its own learned kernel
model = Model([x, ker], y)
EDIT:
@user2179331, thanks for clarifying your intention. You are using the Lambda class incorrectly, which is why the error message appears.
But what you are trying to do can be achieved using the keras.backend functions.
Note, though, that when using these lower-level functions you lose some of the higher-level abstraction. E.g., when using keras.backend.conv1d you need an input of shape (BATCH_SIZE, width, channels) and a kernel of shape (kernel_size, input_channels, output_channels). So in your case let us assume that x has 1 channel (input_channels == 1) and y also has the same number of channels (output_channels == 1).
So your code can now be refactored as follows:
from keras import Input
from keras import backend as K

def convolve1d(x, kernel):
    y = K.conv1d(x, kernel, padding='valid', strides=1, data_format="channels_last")
    return y

input_channels = 1
output_channels = 1
kernel_width = 5
input_width = 100

ker = K.variable(K.random_uniform([kernel_width, input_channels, output_channels]), K.floatx())
x = Input(shape=(input_width, input_channels))
y = convolve1d(x, ker)
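For completeness, a hypothetical way to exercise the refactored snippet; note that in Keras 2 a raw backend op usually has to be wrapped in a Lambda layer before its output can serve as a model output, so this sketch assumes that wrapping.
import numpy as np
from keras import Input, Model
from keras.layers import Lambda

x = Input(shape=(input_width, input_channels))
y = Lambda(lambda t: convolve1d(t, ker))(x)   # the fixed kernel `ker` is captured by the closure
model = Model(x, y)

out = model.predict(np.ones((2, input_width, input_channels)))
print(out.shape)   # expected (2, 96, 1) with 'valid' padding and kernel_width == 5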
I think I understand what you mean. Consider the (incorrect) example code below:
input_signal = Input(shape=(L,), name='input_signal')
input_h = Input(shape=(N,), name='input_h')
faded = Lambda(lambda x: tf.nn.conv1d(input_signal, x))(input_h)   # does not work as intended
You want to convolve each signal vector with a different vector of fading coefficients.
The 'conv' operations in TensorFlow, e.g. tf.nn.conv1d, only support a fixed kernel. Therefore, the code above cannot do what you want.
I don't have a better idea either. The code you gave runs correctly, but it is too complex and not efficient. Another feasible but also inefficient way is to multiply by the Toeplitz matrix whose rows are shifted copies of the fading-coefficient vector. When the signal vector is very long, that matrix becomes extremely large.
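To make the Toeplitz idea concrete, here is a minimal numpy sketch for a single signal/kernel pair; the lengths come from the question, while the names and the use of scipy.linalg.toeplitz are just assumptions for illustration.
import numpy as np
from scipy.linalg import toeplitz

sig_len, ker_len = 100, 5            # lengths taken from the question
x = np.random.randn(sig_len)
ker = np.random.randn(ker_len)

# The first row carries the (reversed) kernel; reversing gives true convolution,
# while keeping the original order matches the cross-correlation that tf.nn.conv1d computes.
row = np.zeros(sig_len)
row[:ker_len] = ker[::-1]
col = np.zeros(sig_len - ker_len + 1)
col[0] = row[0]
T = toeplitz(col, row)               # shape (96, 100): each row is the kernel shifted by one step

y = T @ x                            # equivalent to np.convolve(x, ker, mode='valid')
assert np.allclose(y, np.convolve(x, ker, mode='valid'))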
I need some help... I have a model function, which is:
function y = Surf(param, x)
    global af1 af2 tData % A2 mER2
    A1 = param(1); m1 = param(2); A2 = param(3); m2 = param(4);
    m = param(5); n = param(6);
    k1 = @(T) A1*exp(mER1/T);
    k2 = @(T) A2*exp(mER2/T);
    af = @(T) sech(af1*T + af2);
    y = zeros(length(x), 1);
    for i = 1:length(x)
        a = x(i,1); T = temperature(i,1);
        y(i) = (k2(T) + k1(T)*(a.^m)) * ((af(T) - a).^n);
    end
end
And I have a dataset giving Cure, Cure_rate, and Temperature, each as a single column vector.
Basically, I tried to use:
[output, R1] = lsqcurvefit(@Surf, initial_guess, Cure, Cure_rate)
[output2, R2] = nlinfit(Cure, Cure_rate, @Surf, initial_guess)
And they work pretty well (my initial_guess is the initial guess for the parameters of the above model, namely [1.1e+07 -7.8e+03 1.2e+06 -7.1e+03 2.2 0.72]).
My main problem is that when I try other methods that can do nonlinear regression, such as fminsearch, fmincon, fsolve, fminunc, etc., they just don't work, and I am quite confused about the inputs they expect. Mainly because they don't take the data (Cure, Cure_rate) the way nlinfit and lsqcurvefit do; most of them take only the model function and the initial guess. This is what I tried:
output3 = fminsearch(@Surf, initial_guess)
output4 = fsolve(@Surf, initial_guess)
output5 = fmincon(@Surf, x0, A, b, Aeq, beq)
(Not sure what I should put for the linear inequality constraints A, b and the equality constraints Aeq, beq.)
output6 = fminunc(@Surf, initial_guess)
The problem is that MATLAB keeps saying I have either not enough inputs or too many inputs, which I don't understand. How should I include my dataset (Cure, Cure_rate) in these functions, the way I do with nlinfit and lsqcurvefit?
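For background on the general pattern (independent of MATLAB), general-purpose minimizers of this kind expect a function of the parameters only, so the data has to be baked into a scalar objective, for example via a closure. Below is a minimal scipy sketch with made-up data and a made-up model, not the Surf model above:
import numpy as np
from scipy.optimize import minimize

x_data = np.linspace(0, 1, 50)
y_data = 2.0 * np.exp(-3.0 * x_data) + 0.01 * np.random.randn(50)

def model(params, x):
    a, b = params
    return a * np.exp(b * x)

def sse(params):   # scalar objective; the data is captured from the enclosing scope
    return np.sum((model(params, x_data) - y_data) ** 2)

result = minimize(sse, x0=[1.0, -1.0], method='Nelder-Mead')   # Nelder-Mead is the fminsearch-style simplex method
print(result.x)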
After getting some basic understanding of the GPML toolbox, I wrote my first code using these tools. I have a data matrix, named data, consisting of two columns with a total of 1000 values. I want to use this matrix to estimate the GP value using the GPML toolbox. I wrote my code as follows:
x = data(1:200,1);    % training inputs
Y = data(1:201,2);    % training targets
Ys = data(201:400,2);
Xs = data(201:400,1); % possibly test cases
covfunc = {@covSE, 3};
ell = 1/4; sf = 1;
hyp.cov = log([ell; sf]);
likfunc = @likGauss;
sn = 0.1;
hyp.lik = log(sn);
[ymu ys2 fmu fs2] = gp(hyp, @infExact, [], covfunc, likfunc, X, Y, Xs, Ys);
plot(Xs, fmu);
But when I run this code I get:
Error using covMaha (line 58) Parameter mode is either 'eye', 'iso',
'ard', 'proj', 'fact', or 'vlen'
Could you please help me figure out where I am making a mistake?
I know this is way late, but I just ran into this myself. The way to fix it is to change
covfunc = {@covSE, 3};
to something like
covfunc = {@covSE, 'iso'};
It doesn't have to be 'iso', it can be any of the options listed in the error message. Just make sure your hyperparameters are set correctly for the specific mode you choose. This is detailed more in the covMaha.m file in GPML.
I am trying to run ridge regression in MATLAB (ridge): L is some matrix, x is some random vector, and y = Lx + αn is another vector. I expect ridge(y,L,α) to return the same result as (LL' + α^2 I)^(-1) L'y, but the latter is significantly better. I can't understand the problem, as I think this is exactly what ridge() does. I even tried with α = 1.
For example:
n = randn(N^2,1);
n2 = randn(N^2,1);
L = (some N^2*N^2 matrix);
y = L*n + n2;
x_ridge = ridge(y,L,1);
x_ls = (L*L' + eye(N^2))^-1*L'*y;
and x_ls and x_ridge are significantly different.
Appreciate any help!
I need to perform some elementary histogram matching on 2 sets of 3D data. This is part of a larger algorithm.
My goal is to perform this by minimising the following cost function:
||cumpdf(f(A)) - cumpdf(B)||^2
where:
- cumpdf is the cumulative histogram,
- f() is the linear transformation a*A + b, where a and b are affine coefficients to be determined,
- A is the image to be transformed and B is the image to be matched.
I am using lsqcurvefit; however, I have run into some trouble and really need some help.
A(maskA==0) = 0;
B(maskB==0) = 0;

[na,~] = hist(A(maskA~=0), 500);
na = na ./ numel(A(maskA~=0));
x_data = cumsum(na);

[nb,~] = hist(B(maskB~=0), 500);
nb = nb ./ numel(B(maskB~=0));
y_data = cumsum(nb);

xo = [1.5 -200];
[coeff,~] = lsqcurvefit(@cost, xo, x_data, y_data);

function F = cost(x, xc)
    F = x(1).*A + x(2);
    [nc,~] = hist(C(maskA~=0), 500);
    nc = nc / numel(C(maskA~=0));
    xc = cumsum(nc);
end
maskA and maskB just represent some indexing I need to do.
My question is: I know that the above is wrong. However, I think it best represents what I want to do, regarding the cost function and the goal. Some help would be much appreciated!
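As a sketch of the stated cost, ||cumpdf(a*A + b) - cumpdf(B)||^2, minimised over (a, b): everything below (bin edges, random stand-in volumes, the use of scipy.optimize.minimize, the omission of the masks) is an assumption for illustration, not a fix of the lsqcurvefit code above.
import numpy as np
from scipy.optimize import minimize

A = np.random.rand(32, 32, 32) * 800.0           # stand-ins for the two 3D volumes
B = np.random.rand(32, 32, 32) * 1000.0 - 200.0

edges = np.linspace(-500.0, 1500.0, 501)          # fixed, shared bin edges for both images

def cumpdf(img):
    counts, _ = np.histogram(img.ravel(), bins=edges)
    return np.cumsum(counts / counts.sum())

target = cumpdf(B)

def cost(params):
    a, b = params
    return np.sum((cumpdf(a * A + b) - target) ** 2)

res = minimize(cost, x0=[1.5, -200.0], method='Nelder-Mead')   # derivative-free; the cost is piecewise constant in (a, b)
print(res.x)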