Can't display correct bipartite graph with networkx

I am trying to draw a bipartite graph with nodes on left colored differently than nodes on right. I am using networkx and matplotlib to do so.
Given a bipartite graph with edge list [(1, 3), (2, 5), (3, 4)], I wish to display [1, 2, 3], colored blue, on one side and [4, 5], colored aqua, on the other side, with the edges (1, 3), (2, 5), (3, 4) in between.
The following is my code.
import networkx as nx
import matplotlib.pyplot as plt

def draw_bipartite(edges_list):
    left, right = set(), set()
    for s, t in edges_list:
        right.add(s)
        left.add(t)
    B = nx.Graph()
    B.add_nodes_from(list(right), bipartite=0)
    B.add_nodes_from(list(left), bipartite=1)
    B.add_edges_from(edges_list)
    nodecolor = []
    for node in B.nodes():
        a = 'blue' if node in list(right) else 'aqua'
        nodecolor.append(a)
    l, r = nx.bipartite.sets(B)
    pos = {}
    pos.update((node, (1, index)) for index, node in enumerate(l))
    pos.update((node, (2, index)) for index, node in enumerate(r))
    nx.draw(B, pos=pos, with_labels=True, node_color=nodecolor)
    plt.show()

draw_bipartite([(1, 3), (2, 5), (3, 4)])
In the output, the group [1, 2, 3] does not stay on the left side. How do I keep it on the left side, as well as colored blue?

Update
Looking at your code again, I see that only the color is determined by what you call left and right. The positions are determined according to what you call l and r. You have a bug in how you handle left and right and in how you handle l and r, so I'm addressing each of these separately.
So let's look at the color first: there is a conceptual error in how you define left and right.
You are putting node 3 on both the left and the right: your edge (1, 3) puts 3 in left while (3, 4) puts it in right. The color node 3 ends up with is 'blue' because you've put it in right; since 4 was never put in right, it ends up 'aqua'. Note that it's confusing that the node on the left of an edge tuple ends up in right, and that you color right blue, while your description of the error says you want the left of your plot to be blue. You should make all of this consistent, or it's going to be difficult to avoid making more errors in the future.
Now let's look at node positions, i.e. why [1, 2, 3] does not end up on the left. You've defined l and r rather than reusing the left and right you defined earlier (I don't think this was a good idea; use just one definition, or you're making it harder to hunt down bugs). You've used l,r = nx.bipartite.sets(B), and there are a few issues here. First, the nodes 2 and 5 could end up on either side. It's an arbitrary choice determined by the unpredictable order in which Python loops through the keys of a dict, so you got lucky that 2 ended up in l; similarly, node 1 could easily have ended up in r.
Expecting 3 to end up on the same side as 1 is hopeless: the networkx algorithm bipartite.sets separates nodes based on whether they share edges or not. Since the edge (1, 3) is in your graph, they will not end up on the same side. If you had used your previously defined left and right, then 1 would have been on the right, and 3 would have ended up on either the left or the right, depending on the order of your two pos.update commands, because it is in both left and right.
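In case it helps, here is one way the function could be written along those lines, using a single left/right split for both the colors and the positions. This is just a sketch under one possible convention ("left" means every node that appears as the source of an edge), not the only way to do it:

import networkx as nx
import matplotlib.pyplot as plt

def draw_bipartite(edges_list):
    # One consistent convention: every node that appears as a source goes on
    # the left; everything else goes on the right. Note that with edges like
    # (1, 3) and (3, 4) the graph is not truly bipartite under this split
    # (node 3 is both a source and a target), but it matches the layout asked
    # for: [1, 2, 3] on the left in blue, [4, 5] on the right in aqua.
    left = {s for s, t in edges_list}
    right = {t for s, t in edges_list} - left

    B = nx.Graph()
    B.add_nodes_from(left, bipartite=0)
    B.add_nodes_from(right, bipartite=1)
    B.add_edges_from(edges_list)

    # Use the same sets for both color and position, instead of calling
    # nx.bipartite.sets(B), which may split the nodes differently.
    node_color = ['blue' if node in left else 'aqua' for node in B.nodes()]
    pos = {node: (1, i) for i, node in enumerate(sorted(left))}
    pos.update({node: (2, i) for i, node in enumerate(sorted(right))})

    nx.draw(B, pos=pos, with_labels=True, node_color=node_color)
    plt.show()

draw_bipartite([(1, 3), (2, 5), (3, 4)])

With the example edge list this puts 1, 2, 3 at x = 1 in blue and 4, 5 at x = 2 in aqua.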

Related

Strange shading behaviour with normal maps in UE4

I've been having some very strange lighting behaviour in Unreal 4. In short, here's what I mean:
Fig 1, First, without any normal mapping on the bricks.
Fig 2, Now with a normal map applied, generated based on the same black-and-white brick texture.
Fig 3, The base pixel normals of the objects in question.
Fig 4, The generated normals which get applied.
Fig 5, The material node setup which produces the issue, as shown in Fig 2
As you can see, the issue occurs when using the generated HeightToNormalSmooth node. As shown, this is not an issue relating to object normals (see Fig 3) or to a badly exported normal map (as there isn't one in the traditional sense), nor is it an issue with the HeightToNormalSmooth node itself (Fig 4 shows that it generates the correct bump normals).
To be clear, the issue here is the fact that using a normal texture at all (this issue occurs across all my materials) causes the positive Y facing faces of an object to turn completely black (or it seems, to become purely reflections-based, as increasing roughness on the material causes the black faces to become less 'shiny' looking).
This is really strange; I've tested with multiple different skylight setups and sun directions, and yet this always happens (even when lit directly), but only on +Y aligned faces.
If anyone can offer insight that would be greatly appreciated.
You're subtracting what looks like 1 from the input that then goes into the multiply-by-1 node, if I'm reading it correctly. This will, in most cases, make any image come out black. In UE4, and many other programs, the colors in an image are stored as decimal Red, Green and Blue values in the range 0 to 1. This means that if I wanted to make red, I could use R = 1, G = 0, B = 0; and if R = 0, G = 0, B = 0, the result is black. When you use a multiply node, UE4 takes each pixel of the image you fed into the node (if it was white, R = 1, G = 1, B = 1) and multiplies its R, G and B values by that number. Since zero multiplied by anything is zero, all the pixels in the image end up with R = 0, G = 0, B = 0: all zeros, so you get black.
I also noticed you then multiply by one, which in most cases won't do a whole lot, since you're just multiplying the input by 1. If your input is 0 (black), multiplying it by one won't change it, because 0 * 1 still equals 0.
To fix your issue, try changing the value you subtract from your input to something smaller than one, say a decimal such as 0.6 or 0.5.
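To spell out the 0-to-1 color arithmetic described above, here is a tiny plain-Python illustration (not UE4 nodes; the clamp at the end simply mimics how the value is eventually displayed):

# A mid-gray pixel on the 0-to-1 scale used for colors.
pixel = (0.5, 0.5, 0.5)

subtracted = tuple(c - 1.0 for c in pixel)        # (-0.5, -0.5, -0.5)
multiplied = tuple(c * 1.0 for c in subtracted)   # unchanged: multiplying by 1 does nothing

# Negative channel values get clamped to 0, and R = G = B = 0 is black.
displayed = tuple(max(0.0, min(1.0, c)) for c in multiplied)
print(displayed)  # (0.0, 0.0, 0.0)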
So I've discovered why this was an issue. Turns out there's a little option in the material settings called 'Tangent Space Normal'. This is on by default ('for convenience'), disabling this appears to completely fix the issue with the generated normal maps produced by HeightToNormalSmooth.

How to quantify line shape with ImageJ macro

I would like to quantify the shape of a line on the wings of butterflies, which can vary from quite straight to squiggly, similar to the horizon in a landscape or to a graph (for each x value there is only one y value), although the overall orientation varies. My idea is to use the freehand tool to trace the line of interest and then let an ImageJ macro quantify it (automating this may be tricky because there are many line-like structures). Two traits seem useful to me:
the proportion between the length of the drawn line and the straight line between the end points.
'Dispersion' of the line such as calculated in the Directionality plugin.
Other traits, such as what proportion of the line is above or below the straight line that connects the extremes, may also be useful.
How can this be coded? I am building an interactive macro that prompts the measuring of various traits for an open image.
Hopefully the below (non-functional) code will convey what I am trying to do.
//line shape analysis
run("Select None");
setTool("free hand");
waitForUser("Trace the line between point A and B");
length= measure();
String command = "Directionality";
new PlugInFilterRunner(da, command, "nbins=60, start=-90, method=gradient");
get data...
//to get distance between points A and B
run("Select None");
setTool("multipoint");
waitForUser("Distances","Click on points A and B \nAfter they appear, you can click and drag them if you need to readjust.");
getSelectionCoordinates(xCoordinates, yCoordinates);
xcoordsp = xCoordinates;
ycoordsp = yCoordinates;
makeLine(xcoordsp[0], ycoordsp[0], xcoordsp[1], ycoordsp[1]);
List.setMeasurements;
StrLength = List.getValue("Length");
I have looked online for solutions but found surprisingly little about this relatively simple issue.
warm regards,
Freerk
Here is a simple solution to determine to what extent the line deviates from a straight line between point A and B. The 'straightness' is the proportion between the two measures.
// To measure the traced line length, to compare with the length of the straight line, aka the Euclidean distance
run("Select None");
setTool("polyline");
waitForUser("Trace the line between point V5 and V3 by clicking at each corner; finish with a double click"); // Points V5 and V3 refer to points A and B; adjust as needed
run("Measure");
List.setMeasurements;
FLR = List.getValue("Length"); // FLR for forewing length, real (traced)
// to get the Euclidean distance between points A and B
run("Select None");
setTool("multipoint");
waitForUser("Distances","Click on points A and B \nAfter they appear, you can click and drag them if you need to readjust.");
getSelectionCoordinates(xCoordinates, yCoordinates);
xcoordsp = xCoordinates;
ycoordsp = yCoordinates;
makeLine(xcoordsp[0], ycoordsp[0], xcoordsp[1], ycoordsp[1]);
List.setMeasurements;
FLS = List.getValue("Length"); // FLS for forewing length, straight (Euclidean)
straightness = FLR / FLS; // the 'straightness' proportion described above
I would still be grateful for more sophisticated line parameters.
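Not an ImageJ macro, but in case the underlying math helps, here is a small Python/NumPy sketch of the same two measures, plus the "proportion of the line on one side of the chord" trait from the question, given traced (x, y) coordinates. The coordinate arrays are assumed to come from whatever tracing you use:

import numpy as np

def line_shape(x, y):
    """x, y: coordinates of the traced line, from end point A to end point B."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    traced = np.sum(np.hypot(np.diff(x), np.diff(y)))   # length along the trace
    straight = np.hypot(x[-1] - x[0], y[-1] - y[0])     # Euclidean distance A-B
    straightness = traced / straight                    # 1.0 for a perfectly straight line

    # Which side of the chord A-B each interior point lies on, via the sign of
    # the cross product of (B - A) with (P - A); the mean gives the fraction
    # on one side (flip the inequality for the other side).
    cross = (x[-1] - x[0]) * (y - y[0]) - (y[-1] - y[0]) * (x - x[0])
    fraction_one_side = np.mean(cross[1:-1] < 0)

    return straightness, fraction_one_side

# Example: a trace with one bump, so the ratio comes out a bit above 1.
print(line_shape([0, 1, 2, 3], [0, 1, 0, 0]))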

Location based segmentation of objects in an image (in Matlab)

I've been working on an image segmentation problem and can't seem to get a good idea for my most recent problem.
This is what I have at the moment:
Click here for image. (This is only a generic example.)
Is there a robust algorithm that can automatically discard the right square as not belonging to the group of the other four squares (which I know should always be stacked more or less on top of each other)?
It can sometimes be the case that one of the stacked boxes is not found, so there's a gap, or that the bogus box is on the left side.
Your input is greatly appreciated.
If you have a way of producing BW images like your example:
s = regionprops(BW, 'centroid');
centroids = cat(1, s.Centroid);
xpos = centroids(:,1);
xpos should then be the x-positions of the boxes.
From here you have multiple ways to go, depending on whether you always have just one separated box and one set of grouped boxes or not. For the "one bogus box far away, rest closely grouped" case (away from Matlab, so this is unchecked) you could even do something as simple as:
d = abs(xpos-median(xpos));
bogusbox = centroids(d==max(d),:);
imshow(BW);
hold on;
plot(bogusbox(1),bogusbox(2),'r*');
Making something that's robust for your actual use case, which I am assuming doesn't consist of neat boxes, is another matter; as suggested in the comments, you need some idea of how closely the good boxes are positioned, and how separate the bogus box(es) will be.
For example, you could use other regionprops measurements such as 'BoundingBox' or 'Extrema' and define some sort of measurement of how much the boxes overlap in x relative to each other, then group using that (this could be made to work even if you have multiple stacks in an image).
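Not MATLAB, but here is the same median-distance idea spelled out as a Python/scikit-image sketch, in case it is useful to see the steps in one place (the binary image BW is the only assumed input):

import numpy as np
from skimage.measure import label, regionprops

def find_bogus_box(BW):
    """BW: binary image with one connected blob per box."""
    props = regionprops(label(BW))
    centroids = np.array([p.centroid for p in props])  # rows of (row, col)
    xpos = centroids[:, 1]                             # x position = column

    # The box whose x position is farthest from the median x is the outlier.
    d = np.abs(xpos - np.median(xpos))
    return centroids[np.argmax(d)]                     # (row, col) of the bogus box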

How to remove circles from binary image but keep lines?

In the following image, how does one remove the circles in order to keep only the lines?
Do a morphological opening with an adequate structuring element:
Opening[f, DiskMatrix[7]]
To do the entire task there are a couple of approaches, some starting by subtracting the input image from the previous result (which you might experiment with), as well as distinct ones. One of these distinct approaches starts by thinning the input image, which reduces the circles that do not overlap with lines to single pixels (or close to that, given the circles are not perfect), which you can remove easily. Then you prune this image and detect lines (following image at right).
f = ImageCrop[Binarize[Import["http://i.stack.imgur.com/AurlZ.png"]]] (* Input *)
g = SelectComponents[Thinning[f], "Count", #1 > 10 &] (* Second image *)
h = Pruning[g, 9];
lines = ImageLines[h, 0.1, Method -> "RANSAC", Segmented -> True];
Show[Dilation[h, 3], Graphics[{Thick, Red, Line /@ lines}]] (* Third image *)
You can try complementing the red lines in a given connected component by considering the detected circles in the first image shown together with the orientation of the segments that are close to a given circle.
Use a circular Hough transform to detect circles, and then you can delete them. The File Exchange has several submissions you can use, for example this one, or this one. MATLAB also offers a function called imfindcircles that does the same thing.
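As a rough illustration of the circular Hough idea, here is a Python/OpenCV sketch rather than MATLAB or Mathematica; the file names, radius range and accumulator thresholds are guesses you would need to tune for the actual image:

import cv2
import numpy as np

img = cv2.imread('lines_and_circles.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file name
_, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Detect circles with the circular Hough transform.
circles = cv2.HoughCircles(bw, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=20, minRadius=5, maxRadius=30)

cleaned = bw.copy()
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Paint each detected circle (slightly enlarged) with the background color.
        cv2.circle(cleaned, (int(x), int(y)), int(r) + 2, 0, thickness=-1)

cv2.imwrite('lines_only.png', cleaned)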

Morphological separation of two connected boundaries

I've got a question regarding the following scenario.
While post-processing an image, I obtained a contour which is unfortunately doubly connected, as you can see at the bottom line. To make it obvious: what I want is just the outer line.
Therefore I zoomed in and marked the line I want from the large image.
What I want from this selection is only the outer part, which I've marked in green in the next picture. Sorry for my bad drawing skills. ;)
I am using MATLAB with the Image Processing Toolbox. I also tried bwmorph with the 'hbreak' option, but it threw an error.
How do I solve that problem?
If you were successful could you please tell me a bit more about it?
Thank you in advance!
Sincerely
It seems your input image is a bit different from the one you posted, since I couldn't directly collect the branch points (there were too many of them). So, to start handling your problem, I considered thinning followed by branch-point detection. I also dilate the branch points and remove them from the thinned image, which guarantees that there is in fact no (4- or 8-connected) connection between the different segments in the initial image.
f = im2bw(imread('http://i.imgur.com/yeFyF.png'), 0);
g = bwmorph(f, 'thin', Inf);
h = g & ~bwmorph(bwmorph(g, 'branchpoints'), 'dilate');
Since h holds disconnected segments, the following operation collects the end points of all the segments:
u = bwmorph(h, 'endpoints');
Now, to actually solve your problem, I did some quick analysis on what you want to discard. Consider two distinct segments, a and b, in h. We say a and b overlap if the end points of one are contained in the other. By contained I simply mean that the starting x point of one is smaller than or equal to the other's, and its ending x point is greater than or equal to the other's. In your case, the "mountain" overlaps with the segment that you wish to remove. To decide which of them to remove, consider their area. But since these are segments, area is a meaningless term. To handle that, I connected the end points of each segment and used the interior points as its area. As you can clearly notice, the area of the overlapped segment at the bottom is very small, so we say it is basically a line and discard it, while keeping the "mountain" segment. For this step the image u is of fundamental importance, since with it you have a clear indication of where to start and stop tracking a contour. If you used the image as is, you would have trouble determining where to start and stop collecting the points of a contour (i.e., the raster order would give you an incorrect overlapping indication).
To reconstruct the segment as a single one (currently you have three of them), consider the points you discarded from g when forming h, and use those that don't belong to the now-removed bottom segment.
I'd also use bwmorph
%# find the branch point
branchImg = bwmorph(img,'branchpoints');
%# grow the pixel to 3x3
branchImg = imdilate(branchImg,ones(3));
%# hide the branch point
noBranchImg = img & ~branchImg;
%# label the three lines
lblImg = bwlabel(noBranchImg);
%# in the original image, mask label #3
%# note that it may not always be #3 that you want to mask
finalImg = img;
finalImg(lblImg==3) = 0;
%# show the result
imshow(finalImg)
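Not MATLAB, but here is a rough Python/SciPy analogue of the same pipeline (skeletonize, find and suppress the branch points, label the remaining segments, mask the unwanted one). The file name and the "drop the shortest segment" heuristic are assumptions; as noted above, which label to mask depends on your image:

import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread
from skimage.morphology import skeletonize

img = imread('contour.png', as_gray=True) > 0        # hypothetical input file
skel = skeletonize(img)

# Branch points: skeleton pixels with 3 or more skeleton neighbours
# (the 3x3 sum counts the centre pixel itself, hence >= 4).
neighbours = ndi.convolve(skel.astype(int), np.ones((3, 3), int), mode='constant')
branch = skel & (neighbours >= 4)

# Dilate the branch points and remove them, so the segments become disconnected.
no_branch = skel & ~ndi.binary_dilation(branch, np.ones((3, 3), bool))

# Label the segments and drop one of them, here simply the shortest.
labels, n = ndi.label(no_branch, structure=np.ones((3, 3), int))
sizes = ndi.sum(no_branch, labels, index=range(1, n + 1))
cleaned = no_branch & (labels != (np.argmin(sizes) + 1))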