Residual Neural Network: Concatenation or Element Addition?

With the residual block in residual neural networks, is the addition at the end of the block true element addition or is it concatenation?
For example, would addition([1, 2], [3, 4]) produce [1, 2, 3, 4] or [4, 6]?

It would result in [4, 6]; you can find out more in this paper:

The operation F + x is performed by a shortcut
connection and element-wise addition
This quote is from the popular ResNet paper by Microsoft Research. Therefore it is element-wise addition, hence [4, 6].
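A short NumPy sketch of the difference (not from the paper, just illustrating the arithmetic):

```python
import numpy as np

x = np.array([1, 2])     # identity branch
Fx = np.array([3, 4])    # residual branch F(x)

# Element-wise addition, as ResNet's shortcut does (shapes must match):
print((Fx + x).tolist())                  # [4, 6]

# Concatenation, as e.g. DenseNet uses instead, grows the dimension:
print(np.concatenate([x, Fx]).tolist())   # [1, 2, 3, 4]
```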

Related

Find if coordinates form a complete path

So I have a list of points that generally form a sort of circular shape except there are often little offshoots from the circle that are essentially just lines from say the border of the circle going in a certain direction. I want to create a function that when given this list of coordinates/points finds whether there exists a complete path in this set of points.
I've thought about picking a start point and checking whether there exists a path that doesn't repeat points (i.e. (1,1) -> (2,1) -> (1,1) is disallowed) and gets back to the start point; however, if the start point is in an offshoot of the circle, this wouldn't work.
For instance, a list of coordinates
[[0, 0], [0, 1], [1, 2], [2, 3], [3, 3], [3, 4], [4, 4], [3, 2], [3, 1], [3, 0], [2, -1], [1, -1], [0, -1]]
would form a complete path while if I take out [1, -1] it would not form a complete path.
The solution I went with is to turn the list of points into a matrix, convert that matrix to logicals, and then, assuming you know for sure a point inside the loop, use the imfill function on that point and check whether the (1,1) coordinate has been filled.
n = 50;                                % grid size; must cover all coordinates
mat = zeros(n, n);
% Convert the (row, col) pairs to linear indices; the coordinates must be
% shifted to positive indices first (mat(coordinates) = 1 would misindex).
mat(sub2ind(size(mat), coordinates(:,1), coordinates(:,2))) = 1;
mat = logical(mat);
mat = imfill(mat, [round(n/2) round(n/2)]);  % flood-fill from an interior point
if mat(1,1) == 1
    % not a closed loop: the fill leaked out to the border
else
    % closed loop
end
This code assumes that the middle point lies inside the loop (and is not already filled) and that the (1,1) coordinate is not already filled, assumptions I can make with the data I am working with.
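The same leak test can be sketched without imfill, e.g. in Python with a plain breadth-first flood fill of the background starting from the border (has_closed_loop and the ring points are hypothetical names, and the grid is assumed shifted to non-negative indices):

```python
from collections import deque

def has_closed_loop(points, size):
    """True if the path points enclose an interior cell, i.e. a background
    flood fill started from the border cannot reach every non-path cell."""
    path = set(map(tuple, points))
    seen = set()
    # Seed the fill with every border cell that is not part of the path.
    q = deque((r, c) for r in range(size) for c in range(size)
              if (r in (0, size - 1) or c in (0, size - 1)) and (r, c) not in path)
    seen.update(q)
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < size and 0 <= nc < size
                    and (nr, nc) not in path and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append((nr, nc))
    # Unreachable background cells mean the path encloses an interior.
    return len(seen) < size * size - len(path)

# A small ring of points encloses (2, 2); removing an edge point opens it,
# mirroring how removing [1, -1] above breaks the complete path.
ring = [(1, 1), (1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2), (3, 3)]
print(has_closed_loop(ring, 5))                              # True
print(has_closed_loop([p for p in ring if p != (3, 2)], 5))  # False
```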

How to draw a 3D surface plot with numerical data in MATLAB

I want to draw a surface in MATLAB from numerical data like:
f(x,y,z) = w
f(1,3,5) = 12
f(2,4,6) = 3
f(3,8,12) = 2
f(2,13,22) = 1
etc.
I found plot::matrixplot in MuPAD, but it has two problems:
first, it lives in MuPAD, while I want something that does the same in the normal script environment;
second, it only draws data of the form f(x,y) = z.
sample of mupad :
A := [[2, 1, 1],
      [3, 4, 3],
      [3, 5, 4],
      [2, 6, 5]]:
plot(plot::Matrixplot(A))
sample of plot::matrixplot
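For what it's worth, f(x,y,z) = w is four-dimensional data, so no ordinary surface can show it directly. One common workaround is a 3-D scatter plot colored by w; in MATLAB that is scatter3(x, y, z, [], w, 'filled') plus colorbar. A Python/matplotlib sketch of the same idea, using the sample points from the question (the output filename is arbitrary):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend; no display needed
import matplotlib.pyplot as plt

# Sample points f(x, y, z) = w from the question.
x = np.array([1, 2, 3, 2])
y = np.array([3, 4, 8, 13])
z = np.array([5, 6, 12, 22])
w = np.array([12, 3, 2, 1])

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(x, y, z, c=w, s=60)    # color encodes the fourth dimension w
fig.colorbar(sc, label="w")
fig.savefig("fxyz_scatter.png")
```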

MATLAB: Plotting Two Vectors and Marking Some X-Coordinates Based on Another Vector

Say I have two vectors I want to plot in MATLAB, plus another vector whose values mark where a small "X" should appear on the plot. How do I do that?
To clarify: say I have a vector a = [1, 2, 3, 4, 5], another b = [1, 2, 3, 4, 5, 6], and an identifier vector c = [1, 4]. How do I plot these and show an X on a/b at x = 1 and x = 4?
Actually, to find the points that you want, you can use the ismember function as shown below.
a = 1:5;
c = [1 4];
idx = ismember(1:numel(a), c);   % logical mask of the positions to mark
hold on
plot(find(~idx), a(~idx), 'ro')  % values of a that do NOT match the extra entry
plot(find(idx), a(idx), 'rx')    % values of a that match, marked with an X
I'm not 100% sure this is what you want; leave a comment and I (or someone else) can give you a better answer.
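If a Python version helps, numpy's isin plays the role of ismember here (assuming c holds x-positions on a 1-based axis, and the output filename is arbitrary):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")              # headless backend
import matplotlib.pyplot as plt

a = np.array([1, 2, 3, 4, 5])      # the vector to plot
c = np.array([1, 4])               # x-positions to mark with an X
xs = np.arange(1, len(a) + 1)      # 1-based x-axis, as in MATLAB
mask = np.isin(xs, c)              # numpy's analogue of ismember

plt.plot(xs[~mask], a[~mask], "ro")   # unmarked values
plt.plot(xs[mask], a[mask], "rx")     # marked values
plt.savefig("marked.png")
print(mask.tolist())               # [True, False, False, True, False]
```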

MATLAB code for a positive definite -1/1 matrix

Could anybody tell me how to generate a random-sign (-1/1) positive definite matrix in MATLAB?
Update: Thanks to all who replied, that was very helpful
I am experimenting with compressed sensing using l1-Magic with different sensing matrices. Gaussian worked well, but with Bernoulli, l1-Magic gives me a "matrix must be definite positive" error; that's why I was asking my question.
A really good answer would require more knowledge about the exact requirements and context. From what I've read:
What you're asking for may be doable for non-symmetric matrices
As horchler pointed out,
A = [ 1, 0, 0
      0, 1, 0
     -1, 1, 1];
has all positive eigenvalues, hence is positive definite.
How to find these efficiently for large sized matrices seems to me a non-trivial problem, but I don't really know.
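A quick numpy check of the example (the matrix is lower triangular, so its eigenvalues are simply its diagonal entries):

```python
import numpy as np

A = np.array([[ 1, 0, 0],
              [ 0, 1, 0],
              [-1, 1, 1]])
# Lower triangular matrix: eigenvalues equal the diagonal, all 1.
print(np.linalg.eigvals(A).real)
```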
What you're asking for does not appear possible for symmetric matrices
Restricting entries to the set {-1, 1}, there are NO 2x2, 3x3, 4x4, 5x5, or 6x6 positive definite matrices.
Restricting entries to the set {-1, 0, 1}, the ONLY positive definite matrix that I've found by enumerating all possibilities is the identity matrix. I'd conjecture that anything else is impossible for any matrix size, but I don't know for sure.
Brute force enumeration of 2x2 symmetric matrices:
[-1, -1; -1, -1]   eigenvalues -2, 0
[-1, -1; -1,  1]   eigenvalues -1.4, 1.4
[-1,  1;  1, -1]   eigenvalues -2, 0
[-1,  1;  1,  1]   eigenvalues -1.4, 1.4
[ 1,  1;  1,  1]   eigenvalues 0, 2
[ 1,  1;  1, -1]   eigenvalues -1.4, 1.4
[ 1, -1; -1,  1]   eigenvalues 0, 2
[ 1, -1; -1, -1]   eigenvalues -1.4, 1.4
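The enumeration is easy to reproduce programmatically; a short Python sketch (pd_count is a hypothetical name) counting positive definite cases among all 2x2 symmetric sign matrices:

```python
import itertools
import numpy as np

# Enumerate all 2x2 symmetric matrices [a, b; b, d] with entries in {-1, 1}
# and count those that are positive definite (all eigenvalues > 0).
# det = a*d - b^2 = a*d - 1, which is never positive for a, d in {-1, 1},
# so the count must come out 0, matching the table above.
pd_count = 0
for a, b, d in itertools.product([-1, 1], repeat=3):
    M = np.array([[a, b], [b, d]], dtype=float)
    if np.all(np.linalg.eigvalsh(M) > 0):
        pd_count += 1
print(pd_count)   # 0
```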

FLANN in matlab returns different distance from my own calculation

I'm using FLANN in matlab and using SIFT feature descriptor as my data. There is a function:
[result, ndists] = flann_search(index, testset, ...);
Here the index is built with a kd-tree. The user manual says result returns the nearest neighbors of the samples in testset, and ndists contains the corresponding distances between the test samples and the nearest neighbors. I used the euclidean distance and found that the distances in ndists differ from those computed from the original data. Even stranger, all the numbers in ndists are integers, which is usually not possible for euclidean distances. Can you help me explain this?
FLANN by default returns squared euclidean distance (x1² + ... + xn²). You can change the used metric with flann_set_distance_type(type, order) (see manual).
An example:
from pyflann import *
import numpy as np

dataset = np.array([[  1.,   1.,  1.,  2., 3.],
                    [ 10.,  10., 10.,  3., 2.],
                    [100., 100.,  2., 30., 1.]])
testset = np.array([[ 1.,  1.,  1.,  1., 1.],
                    [90., 90., 10., 10., 1.]])
result, dists = FLANN().nn(
    dataset, testset, 1, algorithm="kmeans", branching=32, iterations=7, checks=16)
Output:
>>> result
array([0, 2], dtype=int32)
>>> dists
array([ 5., 664.])
>>> ((testset[0] - dataset[0])**2).sum()
5.0
>>> ((testset[1] - dataset[2])**2).sum()
664.0
SIFT features are integers, so in the case of the squared euclidean distance the resulting distances are also integers.