When I call G = nx.convert_matrix.from_numpy_array(A, create_using=nx.DiGraph), where A is a 0-1 adjacency matrix, the resulting graph automatically contains edge weights of 1.0 for each edge. How can I prevent this attribute from being added?
I realize I can write
for _, _, d in G.edges(data=True):
    d.clear()
but I would prefer if the attributes were not added in the first place.
There is no way to do that with the built-in networkx functions, but you can build the graph yourself:
G = nx.empty_graph(0, nx.DiGraph)
G.add_nodes_from(range(A.shape[0]))
G.add_edges_from((int(i), int(j)) for i, j in zip(*A.nonzero()))
This is essentially how the nx.convert_matrix.from_numpy_array function is implemented internally; however, I stripped out all of the input checks, so be careful with this. Additional details can be found here
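For instance, a minimal end-to-end sketch of this approach (the toy matrix A is just for illustration):

import numpy as np
import networkx as nx

A = np.array([[0, 1],
              [1, 0]])                    # toy 0-1 adjacency matrix

G = nx.empty_graph(0, nx.DiGraph)         # empty directed graph
G.add_nodes_from(range(A.shape[0]))       # one node per row of A
G.add_edges_from((int(i), int(j)) for i, j in zip(*A.nonzero()))

print(list(G.edges(data=True)))           # [(0, 1, {}), (1, 0, {})] -- no 'weight'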
I am using Keras and TensorFlow 1.4.
I want to explicitly specify which neurons are connected between two layers. Therefore I have a matrix A with ones in it wherever neuron i in the first layer is connected to neuron j in the second layer, and zeros elsewhere.
My first attempt was to create a custom layer with a kernel that has the same size as A, with non-trainable zeros where A has zeros and trainable weights where A has ones. Then the desired output would be a simple dot product. Unfortunately I did not manage to figure out how to implement a kernel that is partly trainable and partly non-trainable.
Any suggestions?
(Building a functional model and connecting a lot of neurons by hand could be a workaround, but that is a somewhat 'ugly' solution.)
The simplest way I can think of, if you have this matrix correctly shaped, is to subclass the Dense layer and simply multiply the original weights by your matrix in the code:
from keras.layers import Dense

class CustomConnected(Dense):

    def __init__(self, units, connections, **kwargs):
        # `connections` is the matrix A
        self.connections = connections

        # initialize the original Dense layer with all the usual arguments
        super(CustomConnected, self).__init__(units, **kwargs)

    def call(self, inputs):
        # mask the kernel before calling the original call:
        self.kernel = self.kernel * self.connections

        # call the original calculations:
        return super(CustomConnected, self).call(inputs)
Using:
model.add(CustomConnected(units, matrixA))
model.add(CustomConnected(hidden_dim2, matrixB, activation='tanh'))  # can use all the other named parameters...
Notice that every neuron/unit still has a bias added at the end. The argument use_bias=False will still work if you don't want biases. You can also do exactly the same thing with a vector B, for instance, masking the original biases with self.bias = self.bias * vectorB.
Hint for testing: use different input and output dimensions, so you can be sure that your matrix A has the correct shape.
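For example, a minimal sketch of such a shape check (the dimensions and matrixA here are hypothetical):

import numpy as np
from keras.models import Sequential

input_dim, units = 4, 3                   # deliberately different sizes
matrixA = np.random.randint(0, 2, (input_dim, units)).astype('float32')

model = Sequential()
model.add(CustomConnected(units, matrixA, input_dim=input_dim))
model.compile(optimizer='adam', loss='mse')
model.summary()                           # kernel (and matrix A) shape: (4, 3)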
I just realized that my code is potentially buggy, because I'm changing a property that is used by the original Dense layer. If weird behaviors or messages appear, you can try another call method:
def call(self, inputs):
    output = K.dot(inputs, self.kernel * self.connections)
    if self.use_bias:
        output = K.bias_add(output, self.bias)
    if self.activation is not None:
        output = self.activation(output)
    return output
Here, K comes from import keras.backend as K.
You may also go further and set a custom get_weights() method if you want to see the weights masked with your matrix. (This would not be necessary in the first approach above)
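A possible sketch of that override, assuming the same CustomConnected class as above (untested; the masking mirrors the call method):

def get_weights(self):
    # report the kernel as it is actually used, i.e. masked by the
    # connection matrix, so inspection matches the layer's computation
    weights = super(CustomConnected, self).get_weights()
    weights[0] = weights[0] * self.connections  # weights[0] is the kernel
    return weights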
I have an adjacency matrix adj and a cell array nodeNames that contains the names that will be given to the graph G constructed from adj.
So I use G = digraph(adj,nodeNames); and I get the following graph:
Now, I want to find the strongly connected components in G and do a graph condensation so I use the following:
C = condensation(G);
p2 = plot(C);
and get the following result:
So I have 6 strongly connected components, but my problem is that I lost the node names. I want to get something like:
Is there any way to get the node names in the result of the condensation?
I think the official documentation can take you to the right point:
Output Arguments
C - Condensation Graph
Condensation graph, returned as a digraph object. C is a directed
acyclic graph (DAG), and is topologically sorted. The node numbers in
C correspond to the bin numbers returned by conncomp.
Let's take a look at conncomp:
conncomp(G) returns the connected components of graph G as bins. The
bin numbers indicate which component each node in the graph belongs to.
Look at the examples... I think that if you use conncomp on your graph before using the condensation function, you will be able to rebuild your node names on your new graph with a little effort.
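As an illustration of that bins-to-names idea, here is the same construction in Python with networkx (which appears earlier in this thread); nx.condensation records, for each condensed node, the original nodes it contains:

import networkx as nx

G = nx.DiGraph([('a', 'b'), ('b', 'a'), ('b', 'c')])   # toy directed graph
C = nx.condensation(G)                                  # DAG of the SCCs

# each condensed node carries a 'members' set of original node names,
# which can be joined into a label for plotting
labels = {n: ','.join(sorted(d['members'])) for n, d in C.nodes(data=True)}
print(labels)                                           # e.g. {0: 'a,b', 1: 'c'}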
I would like to use the linkage function in matlab with a custom distance.
My distance function is in the form:
Distance = pdist(matrix,@mydistance);
so given a
matrix = rand(132,18)
Distance will be a 1x8646 vector;
D_matrix = squareform(Distance,'tomatrix');
is a 132x132 matrix containing all the pairwise distances between the rows of matrix.
How can I embed mydistance in linkage?
You can use a call to linkage like this:
Z = linkage(Data,'single',@mydistance)
where 'single' can also be any of the other cluster merge methods as described here: http://www.mathworks.com/help/stats/linkage.html.
In other words, just pass your function handle as the third argument to linkage. Note, however, that you cannot use the 'savememory' option of linkage together with a custom distance function. This is causing me some frustration with my 300,000 x 6 dataset. I think the solution will be to project it into some space where Euclidean distance is defined and meaningful, but we'll see how that goes.
Besides using
tree = linkage(Data,'single',@mydistance)
like Imperssonator suggests, you can also use
dissimilarity = pdist(Data,@mydistance);
tree = linkage(dissimilarity,'single');
The latter has the benefit of allowing Data to be an object array, with @mydistance taking objects as arguments.
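For comparison, the same two-step pattern (custom metric via pdist, then linkage on the condensed distances) looks like this in Python with SciPy; the metric here is just a placeholder:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

def mydistance(u, v):
    # placeholder custom metric between two row vectors
    return np.abs(u - v).max()

data = np.random.rand(132, 18)
dissimilarity = pdist(data, metric=mydistance)   # condensed vector, 8646 entries
tree = linkage(dissimilarity, method='single')   # hierarchical cluster tree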
Let us suppose that we have the following plot of a singular value distribution,
which was produced by the following command:
stem(SV)
where SV is the vector of singular values. Visually, of course, we can find approximate values of the singular values, but is there any way to get the values from the graph itself? Of course someone may say that if we have SV we can access it directly, but I want a graphical tool to get the values from the picture itself, for example like this:
b = stem(SV);
but when I type b, I am getting the following number:
b
b =
174.0051
I am learning MATLAB on my own, so please help me learn how to find values from graphics in MATLAB.
The value stored in your variable b is a handle to the stem plot object (a stem series), not the plotted data itself. You can access the properties of this object using get. To access the values in the plot, you can use
b = stem(SV);
values = get(b, 'ydata');
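The analogous round trip in Python with Matplotlib, as a rough sketch (the SV values here are made up): stem returns a container whose marker line still holds the plotted data:

import numpy as np
import matplotlib.pyplot as plt

SV = np.array([5.0, 2.5, 1.0, 0.2])   # made-up singular values
container = plt.stem(SV)

# the container's marker line stores the plotted points
values = container.markerline.get_ydata()
print(values)                          # recovers the original SV values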
Given a BW image that contains some connected components.
Then, given a single pixel P in the image, how do I find which component contains the pixel P? It is guaranteed that the pixel P always lies on the white area of one of the connected components.
Currently, I use CC = bwconncomp(BW), then I iterate over each component using a 'for' loop. Within each component, I iterate over the pixel indices, and for each pixel I check whether its value equals the (linear index of) pixel P. If I find it, I record the number of that connected component.
However, this seems inefficient for such a simple task. Any suggestions for improvement? Thank you very much in advance.
MATLAB provides multiple functions that implement connected-component labeling in different ways.
In your example, I would suggest bwlabel.
http://www.mathworks.com/help/images/ref/bwlabel.html
[L, num] = bwlabel(imgBW)
This performs a full-image connected-component labeling on a black-and-white image.
After calling this function, the label that pixel P belongs to can be read off the result matrix L, as in label_to_find = L(row, col). Simple as that.
To extract a mask image for that label, use logical(L == label_to_find).
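As a side-by-side illustration, the same lookup in Python with scipy.ndimage (the toy image and pixel are made up):

import numpy as np
from scipy import ndimage

BW = np.array([[1, 1, 0],
               [0, 0, 0],
               [0, 1, 1]], dtype=bool)   # toy binary image

L, num = ndimage.label(BW)                # full-image connected-component labeling
row, col = 2, 1                           # made-up pixel P (on a white area)
label_to_find = L[row, col]               # component label of P
mask = (L == label_to_find)               # mask image for that component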
If you use a different software package such as OpenCV you may be able to get better performance (efficiency in the sense of cutting unnecessary or redundant computation), but in MATLAB the emphasis is on convenience and prototyping speed.