How to address all faces of a cube (3d array) in MATLAB?

I need to assign boundary values to a 3d box.
Assuming I have z = rand(L,M,N), is there a better way to address all the faces of this box than processing all 6 faces individually, z(:,:,1), z(:,:,end), z(:,1,:), z(:,end,:), z(1,:,:), z(end,:,:)?
This is what I have right now:
L=3;M=4;N=5;
z = rand(L,M,N);
bv = 0;
z([1,end],:,:) = bv;
z(:,[1,end],:) = bv;
z(:,:,[1,end]) = bv;
I would like to be able to do something like z(indices) = bv.

If you have the Image Processing toolbox, using padarray would work:
z = padarray(z(2:end-1,2:end-1,2:end-1), [1 1 1], bv);
This just takes the inner block of the cube and pads it with one layer of bv on every side.
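As a quick sanity check (a sketch, assuming the Image Processing Toolbox and the same L, M, N and bv as above; the names zOrig, zFaces, zPadded are just illustrative), the padded result matches the direct face assignment:
zOrig = rand(L,M,N);
zFaces = zOrig;
zFaces([1,end],:,:) = bv; zFaces(:,[1,end],:) = bv; zFaces(:,:,[1,end]) = bv;
zPadded = padarray(zOrig(2:end-1,2:end-1,2:end-1), [1 1 1], bv);
isequal(zFaces, zPadded) % expected: logical 1 (true)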

Not sure if this is any better than your code, but here it goes:
%// Data
L = 3;
M = 4;
N = 5;
z = rand(L,M,N)
newValue = 0;
%// Let's go
indL = false(L, 1, 1);
indM = false(1, M, 1);
indN = false(1, 1, N);
indL([1 end]) = true;
indM([1 end]) = true;
indN([1 end]) = true;
ind = bsxfun(@or, bsxfun(@or, indL, indM), indN); %// logical index of all faces
z(ind) = newValue
Before:
z(:,:,1) =
0.2653 0.7302 0.1078 0.8178
0.8244 0.3439 0.9063 0.2607
0.9827 0.5841 0.8797 0.5944
z(:,:,2) =
0.0225 0.1615 0.0942 0.6959
0.4253 0.1788 0.5985 0.6999
0.3127 0.4229 0.4709 0.6385
z(:,:,3) =
0.0336 0.5309 0.8200 0.5313
0.0688 0.6544 0.7184 0.3251
0.3196 0.4076 0.9686 0.1056
z(:,:,4) =
0.6110 0.0908 0.2810 0.4574
0.7788 0.2665 0.4401 0.8754
0.4235 0.1537 0.5271 0.5181
z(:,:,5) =
0.9436 0.2407 0.6718 0.2548
0.6377 0.6761 0.6951 0.2240
0.9577 0.2891 0.0680 0.6678
After:
z(:,:,1) =
0 0 0 0
0 0 0 0
0 0 0 0
z(:,:,2) =
0 0 0 0
0 0.1788 0.5985 0
0 0 0 0
z(:,:,3) =
0 0 0 0
0 0.6544 0.7184 0
0 0 0 0
z(:,:,4) =
0 0 0 0
0 0.2665 0.4401 0
0 0 0 0
z(:,:,5) =
0 0 0 0
0 0 0 0
0 0 0 0
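On MATLAB R2016b or newer, which supports implicit expansion, the same logical mask can be built without bsxfun; a sketch under that assumption, reusing indL, indM, indN and newValue from the code above:
ind = indL | indM | indN; % implicit expansion broadcasts the three vectors to an L-by-M-by-N logical mask
z(ind) = newValue;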

Related

How to find homography matrix

I have 4 corresponding point locations in the source and destination images and have written them into the homography system below. I need help finding the H matrix in MATLAB.
sx1 = 285; sx2 = 576; sx3 = 583; sx4 = 295; %source image x locations
sy1 = 505; sy2 = 454; sy3 = 868; sy4 = 858; %source image y locations
dx1 = 439; dx2 = 786; dx3 = 450; dx4 = 789; %destination image x locations
dy1 = 539; dy2 = 528; dy3 = 878; dy4 = 882; %destination image y locations
A = [sx1 sy1 1 0 0 0 -sx1*dx1 -sy1*dx1;
     sx2 sy2 1 0 0 0 -sx2*dx2 -sy2*dx2;
     sx3 sy3 1 0 0 0 -sx3*dx3 -sy3*dx3;
     sx4 sy4 1 0 0 0 -sx4*dx4 -sy4*dx4;
     0 0 0 sx1 sy1 1 -sx1*dy1 -sy1*dy1;
     0 0 0 sx2 sy2 1 -sx2*dy2 -sy2*dy2;
     0 0 0 sx3 sy3 1 -sx3*dy3 -sy3*dy3;
     0 0 0 sx4 sy4 1 -sx4*dy4 -sy4*dy4];
B = [dx1; dx2; dx3; dx4; dy1; dy2; dy3; dy4];
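For completeness, a minimal sketch of one common way to recover H from this system, assuming the usual normalization h33 = 1 (this step is not taken from the question itself):
h = A \ B;                  % solve the 8x8 linear system for the 8 unknown entries
H = reshape([h; 1], 3, 3)'; % append h33 = 1 and reshape row-wise into the 3x3 homography
If the Image Processing Toolbox is available, fitgeotrans(movingPoints, fixedPoints, 'projective'), with the points supplied as 4-by-2 arrays, is another way to estimate the same transform.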

How to fix up the error in matrix dimensions in MATLAB R2016b

I am working on MATLAB code that involves deep learning with neural networks.
The images (the data) are fed in as matrices,
but I am getting the error "Matrix dimensions must agree".
Can someone please help me with this problem?
I tried using element-wise multiplication .* instead of matrix multiplication *, but that didn't work.
Function Deeplearningoriginal:
function [w1,w2,w3,w4] = Deeplearningoriginal(w1,w2,w3,w4,input_Image,correct_output)
alpha = 0.01;
N = 5;
for k = 1:N
    input_Image = reshape(input_Image(:,:,k), 25, 1);
    input_of_hidden_layer1 = w1 * input_Image;
    output_of_hidden_layer1 = ReLU(input_of_hidden_layer1);
    input_of_hidden_layer2 = w2 * output_of_hidden_layer1;
    output_of_hidden_layer2 = ReLU(input_of_hidden_layer2);
    input_of_hidden_layer3 = w3 * output_of_hidden_layer2;
    output_of_hidden_layer3 = ReLU(input_of_hidden_layer3);
    input_of_output_node = w4 * output_of_hidden_layer3;
    final_output = Softmax(input_of_output_node);
    correct_output_transpose = correct_output(k,:);
    error = correct_output_transpose - final_output;
    delta4 = error;
    error_of_hidden_layer3 = w4' * delta4;
    delta3 = (input_of_hidden_layer3 > 0) .* error_of_hidden_layer3;
    error_of_hidden_layer2 = w3' * delta3;
    delta2 = (input_of_hidden_layer2 > 0) .* error_of_hidden_layer2;
    error_of_hidden_layer1 = w2' * delta2;
    delta1 = (input_of_hidden_layer1 > 0) .* error_of_hidden_layer1;
    adjustment_of_w4 = alpha * delta4 * output_of_hidden_layer3';
    adjustment_of_w3 = alpha * delta3 * output_of_hidden_layer2';
    adjustment_of_w2 = alpha * delta2 * output_of_hidden_layer1';
    adjustment_of_w1 = alpha * delta1 * reshaped_input_image';
    w1 = w1 + adjustment_of_w1;
    w2 = w2 + adjustment_of_w2;
    w3 = w3 + adjustment_of_w3;
    w4 = w4 + adjustment_of_w4;
end
end
Training network:
input_Image = zeros (5,5,5);
input_Image(:,:,1) = [ 1 0 0 1 1;
1 1 0 1 1;
1 1 0 1 1;
1 1 0 1 1;
1 0 0 0 1;
];
input_Image(:,:,2) = [ 0 0 0 0 1;
1 1 1 1 0;
1 0 0 0 1;
0 1 1 1 1;
0 0 0 0 0;
];
input_Image(:,:,3) = [ 0 0 0 0 1;
1 1 0 0 1;
1 0 1 0 1;
0 0 0 0 0;
1 1 1 0 1;
];
input_Image(:,:,4) = [ 1 1 1 0 1;
1 1 0 0 1;
1 0 1 0 1;
0 0 0 0 0;
1 1 1 0 1;
];
input_Image(:,:,5) = [ 0 0 0 0 0;
0 1 1 1 1;
0 0 0 0 1;
1 1 1 1 0;
0 0 0 0 1;
];
correct_output = [ 1 0 0 0 0;
0 1 0 0 0;
0 0 1 0 0;
0 0 0 1 0;
0 0 0 0 1;
];
w1 = 2* rand(20,25) -1;
w2 = 2* rand(20,20) -1;
w3 = 2* rand(20,20) -1;
w4 = 2* rand(5,20) -1;
for epoch = 1:100
    [w1,w2,w3,w4] = Deeplearningoriginal(w1,w2,w3,w4,input_Image,correct_output);
end
I expected this code to run but because of the error I am not able to proceed.
The problem is the reshape (actually, there are two problems). After the line
input_Image = reshape(input_Image(:,:,k), 25, 1);
input_Image is an array with 25 rows and 1 column, whereas w2, w3, and w4 have only 20 columns. To do the matrix multiplication A*B, A must have as many columns as B has rows.
The other problem with the reshape as written is that after the first pass through the loop, input_Image is no longer a 5x5x5 array; it is a 25x1 array that contains only the elements of input_Image(:,:,1). It is necessary to use a different name on the left-hand side of the assignment (and throughout the rest of the loop) to avoid losing the content of input_Image.
Hope this helps,
JAC
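Following that advice, a minimal sketch of the corrected loop body (the name reshaped_input_Image is illustrative; any name other than input_Image avoids the overwrite):
for k = 1:N
    % reshape into a new variable so the 5x5x5 input_Image is left intact
    reshaped_input_Image = reshape(input_Image(:,:,k), 25, 1);
    input_of_hidden_layer1 = w1 * reshaped_input_Image;
    % ... forward and backward pass as before, and the final update becomes:
    adjustment_of_w1 = alpha * delta1 * reshaped_input_Image';
end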

Hopfield Network Implementation using Perceptron Learning Rule

I am stuck on the implementation of a Hopfield network with the perceptron learning rule. The idea is to learn the weights for a pattern (a binary vector) using a single-layer perceptron, and then perform the associative memory task using the standard Hopfield algorithm. However, when I try to recall a stored vector, the output does not converge to the correct pattern.
See the link, Section 13.4, for more detail on the perceptron implementation; the code follows.
w = rand(1,21); %weights
d = [0 0 0 0 0 0]; %desired output
eta = .7; %learning rate
x =[1 1 0 0 0 0]; %memorized pattern
z = zeros(6, 21);
z(1,:) = [x(2) x(3) x(4) x(5) x(6) 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0];
z(2,:) = [x(1) 0 0 0 0 x(3) x(4) x(5) x(6) 0 0 0 0 0 0 0 -1 0 0 0 0];
z(3,:) = [0 x(1) 0 0 0 x(2) 0 0 0 x(4) x(5) x(6) 0 0 0 0 0 -1 0 0 0];
z(4,:) = [0 0 x(1) 0 0 0 x(2) 0 0 x(3) 0 0 x(5) x(6) 0 0 0 0 -1 0 0];
z(5,:) = [0 0 0 x(1) 0 0 0 x(2) 0 0 x(3) 0 x(4) 0 x(6) 0 0 0 0 -1 0];
z(6,:) = [0 0 0 0 x(1) 0 0 0 x(2) 0 0 x(3) 0 x(4) x(5) 0 0 0 0 0 -1];
for t = 1:100
    index = randperm(6);
    for i = 1:6
        Y(t,index(i)) = z(index(i),:) * w';
        y(t,index(i)) = sgn(Y(t,index(i)));
        w = w + eta * (d(index(i)) - y(t,index(i))) * z(index(i),:);
    end
end
n = 6;
probe = input('Enter the probe vector: ');
signal_vector = 2*probe-1; % Convert probe to bipolar form
flag = 0; % Initialize flag
old = signal_vector; %save original input
while flag ~= 6 % test if old vector is same as new vector
    index = randperm(n); % make sequence for asynchronous update
    for j = 1:n
        v = z(index(j),:) * w';
        if v > 0
            signal_vector(index(j)) = 1;
            x(index(j)) = 1;
        elseif v < 0
            signal_vector(index(j)) = -1;
            x(index(j)) = -1;
        end
    end
    flag = signal_vector * old';
end
disp('The recalled vector is ')
0.5*(signal_vector + 1)
function y = sgn(x)
    if x > 0
        y = 1;
    else
        y = -1;
    end
end

Matlab/Octave 1-of-K representation

I have a y of size 5000x1 (a column vector) containing integers between 1 and 10. I want to expand those indices into 1-of-10 vectors. I.e., y contains 1, 2, 3, ... and I want it to "expand" to:
1 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0
What is the best way to do that?
I tried:
Y = zeros(5000,10); Y(y) = 1;
but it didn't work.
It works for vectors though:
if y = [2 5 7], and Y = zeros(1,10), then Y(y) = [0 1 0 0 1 0 1 0 0 0].
Consider the following:
y = randi([1 10],[5 1]); %# vector of 5 numbers in the range [1,10]
yy = bsxfun(@eq, y, 1:10)'; %# 1-of-10 encoding
Example:
>> y'
ans =
8 8 4 7 2
>> yy
yy =
0 0 0 0 0
0 0 0 0 1
0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
1 1 0 0 0
0 0 0 0 0
0 0 0 0 0
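On MATLAB R2016b or newer (or Octave), implicit expansion makes bsxfun unnecessary; a sketch under that assumption:
yy = (y == 1:10)'; %# same 1-of-10 encoding via implicit expansion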
n=5
Y = ceil(10*rand(n,1))
Yexp = zeros(n,10);
Yexp(sub2ind(size(Yexp),1:n,Y')) = 1
Also, consider using sparse, as in: Creating Indicator Matrix.
While sparse may be faster and save memory, an answer involving eye() is more elegant: it is faster than a loop, and eye() was introduced during the Octave lecture of that class.
Here is an example for values 1 to 4:
V = [3;2;1;4];
I = eye(4);
Vk = I(V, :);
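Applied to the asker's case (a sketch, assuming y is the 5000x1 vector of labels in 1..10):
I = eye(10);
Y = I(y, :); % 5000x10: row i has a 1 in column y(i)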
You can try cellfun operations:
function vector = onehot(vector,decimal)
    vector(decimal) = 1;
end
aa = zeros(10,2);
dec = [5,6];
% split into columns
C = num2cell(aa,1);
D = num2cell(dec,1);
onehotmat = cellfun(@onehot, C, D, "UniformOutput", false);
output = cell2mat(onehotmat);
I think you mean:
y = [2 5 7];
Y = zeros(5000,10);
Y(:,y) = 1;
After the question edit, it should be this instead:
y = [2,5,7,9,1,4,5,7,8,9,...]; % (size (1,5000))
for i = 1:5000
    Y(i,y(i)) = 1;
end

Set specified indices to zero

I have two matrices (x1 and x2) of the same size. I would like to use the elements equal to zero in x1 to set the same elements to zero in x2.
The non-working solution I have right now follows:
[i j] = find(x1 == 0);
x2(i,j) = 0;
I've also got a working solution which is:
[i j] = find(x1 == 0);
for n=1:length(i)
x2(i(n),j(n)) = 0;
end
thanks!
Try x2(x1 == 0) = 0. For example (the roles are swapped below, using the zeros in x2 to zero out x1, since rand produces no exact zeros):
>> x1 = rand(5, 5)
x1 =
0.4229 0.6999 0.5309 0.9686 0.7788
0.0942 0.6385 0.6544 0.5313 0.4235
0.5985 0.0336 0.4076 0.3251 0.0908
0.4709 0.0688 0.8200 0.1056 0.2665
0.6959 0.3196 0.7184 0.6110 0.1537
>> x2 = randi(2, 5, 5) - 1
x2 =
0 1 1 0 1
0 1 0 0 1
1 1 1 1 0
0 1 1 1 1
1 0 0 0 0
>> x1(x2 == 0) = 0
x1 =
0 0.6999 0.5309 0 0.7788
0 0.6385 0 0 0.4235
0.5985 0.0336 0.4076 0.3251 0
0 0.0688 0.8200 0.1056 0.2665
0.6959 0 0 0 0