I'm working on a project in which I first had to find the shortest path in a huge network graph using the A* algorithm, and then visualize the same graph using a pyvis Network. In this pyvis network, the path I've calculated should be highlighted as the shortest path.
e.g. consider this code for a Game of Thrones character network:
from pyvis.network import Network
import pandas as pd
got_net = Network(height='750px', width='100%', bgcolor='#222222', font_color='white')
# set the physics layout of the network
got_net.barnes_hut()
got_data = pd.read_csv('https://www.macalester.edu/~abeverid/data/stormofswords.csv')
sources = got_data['Source']
targets = got_data['Target']
weights = got_data['Weight']
edge_data = zip(sources, targets, weights)
for e in edge_data:
    src = e[0]
    dst = e[1]
    w = e[2]
    got_net.add_node(src, src, title=src)
    got_net.add_node(dst, dst, title=dst)
    got_net.add_edge(src, dst, value=w)
neighbor_map = got_net.get_adj_list()
# add neighbor data to node hover data
for node in got_net.nodes:
    node['title'] += ' Neighbors:<br>' + '<br>'.join(neighbor_map[node['id']])
    node['value'] = len(neighbor_map[node['id']])
got_net.show('gameofthrones.html')
Now how do I highlight a specific path in this graph? I've gone through the documentation, but there isn't anything similar.
Here's an example using NetworkX to create the graph and gravis to visualize it. I had to use a different URL; hope it's the same data. I've used the weights as edge widths and colored the edges with large weights. Alternatively, you can calculate a shortest path between two nodes of interest and then color that path or assign edge widths so it stands out.
Disclosure: I'm the author of gravis. I don't know if the same can be achieved with pyvis, but since I know that gravis supports the requirements well, I provided this solution and hope it's useful.
import gravis as gv
import networkx as nx
import pandas as pd
url = 'https://raw.githubusercontent.com/pupimvictor/NetworkOfThrones/master/stormofswords.csv'
got_data = pd.read_csv(url)
g = nx.Graph()
for i, (source, target, weight) in got_data.iterrows():
    width = weight / 10
    g.add_edge(source, target, size=width, color='blue' if width > 3 else 'black')
gv.d3(g)
Edit: Here's the output if you use this code inside a Jupyter notebook. You can also use a regular Python interpreter and display the plot inside a browser window that pops up with fig = gv.d3(g) followed by fig.display().
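For the shortest-path variant mentioned above, here's a minimal sketch of how it could look. The endpoints 'Jon' and 'Tyrion' are just assumed example names from the dataset, and I'm using NetworkX's built-in A* search; the size and color attributes are the same ones gravis picks up in the example above.

import gravis as gv
import networkx as nx
import pandas as pd

url = 'https://raw.githubusercontent.com/pupimvictor/NetworkOfThrones/master/stormofswords.csv'
got_data = pd.read_csv(url)

g = nx.Graph()
for i, (source, target, weight) in got_data.iterrows():
    g.add_edge(source, target, weight=weight, size=1, color='black')

# With no heuristic given, A* degenerates to Dijkstra but still finds a shortest path.
path = nx.astar_path(g, 'Jon', 'Tyrion', weight='weight')

# Thicken and recolor the edges along the path so it stands out.
for u, v in zip(path, path[1:]):
    g.edges[u, v]['size'] = 4
    g.edges[u, v]['color'] = 'red'

gv.d3(g)

Note that the Weight column here counts interactions, so if you want "strongly connected" characters to end up close on the path, you may prefer to invert the weights before using them as path costs.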
I am trying to count the instances of the vehicle class in each image of the KITTI-360 instance-segmented dataset. As a trial, I first tried to do it on a single image, but I am getting only one instance value when I run my code, which means that all the instances of the vehicle class are denoted by only one value in the image. I have attached the code that I used below.
I want to know why this is, or whether I am doing something wrong in my code.
"""
This file is for the verification of the instance confirmation for the pixel values
"""
#Imports
import os
import numpy as np
import cv2
import json
# Import image from the file location
CWD = os.getcwd()
print(CWD)
instance_folder = os.path.join(CWD, 'image_my_data', "instance")
print(instance_folder)
instance_image_path = os.path.join(instance_folder, "0000004402.png")
print(instance_image_path)
instance_image_array = cv2.imread(instance_image_path)
# print the size of the image for reference
print(instance_image_array.shape)
# The following pixel locations were measured; I want to see what the instance values are at these locations.
# Pixel location as tuples
pixel_location_1 = (210, 815)
pixel_location_2 = (200, 715)
# print the pixel location, for the above values
print('pixel values at (210, 815)', instance_image_array[pixel_location_1[0], pixel_location_1[1]])
print('pixel values at (200, 715)', instance_image_array[pixel_location_2[0], pixel_location_2[1]])
Note: I chose the pixel values above by opening the image in Paint and noting down the x and y coordinates at locations where I can physically see that two separate instances of the class are present.
Hope someone is able to help me with this.
I found the answer to my own question. The easiest way to find the instances in an image is to read the image using cv2.imread(image, cv2.IMREAD_ANYDEPTH).
The reason is that the KITTI-360 instance images are 16-bit images. We can use the regular imread to read the image as an RGB image, but that will not give the correct instance ids. Using the flag above reads the image as a single channel, and that single channel contains the instance id of each object.
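For illustration, a minimal sketch of reading the image this way and counting the instances, assuming the same file as in the question:

import cv2
import numpy as np

# IMREAD_ANYDEPTH keeps the full 16-bit values instead of converting to 8-bit RGB.
instance_image_array = cv2.imread("0000004402.png", cv2.IMREAD_ANYDEPTH)

# Every distinct value in the map is one instance id.
instance_ids = np.unique(instance_image_array)
print(len(instance_ids), "distinct instance ids (including background)")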
I hope this helps someone else.
I have 40 DICOM and 40 PNG images (data and their masks) for a fully convolutional network. They are loaded into my Google Drive and have been found by the notebook via print(os.listdir(...)), as evidenced in the first block of code below, where the names of all 80 files in the above sets are listed.
I have also globbed all of the DICOM and PNG files into img_path and mask_path, both with lengths of 40, in the second block of code below.
I am now attempting to resize all of the images to 256 x 256 before inputting them into the U-net-like architecture for segmentation. However, I cannot load them via the nib.load() call, as it cannot work out the file type of the DCM and PNG files; for the latter it does not error, but it still produces an empty set of data, as the last block of code shows.
I am assuming that, once the first couple of lines inside the for loop in the third block of code are rectified, pre-processing will be complete and I can move on to the U-net implementation.
I have the current pydicom running in the Colab notebook and tried it in lieu of the nib.load() call, which produced the same error as the current one.
#import data as data
import os
import pydicom
from PIL import Image
import numpy as np
import glob
import imageio
print(os.listdir("/content/drive/My Drive/Images"))
print(os.listdir("/content/drive/My Drive/Masks"))
pixel_data = []
images = glob.glob("/content/drive/My Drive/Images/IMG*.dcm")
for image in images:
    dataset = pydicom.dcmread(image)
    pixel_data.append(dataset.pixel_array)
#print(len(images))
#print(pixel_data)
pixel_data1 = []  # ----------------> this section is the trouble area <-------
masks = glob.glob("content/drive/My Drive/Masks/IMG*.png")
for mask in masks:
    dataset1 = imageio.imread(mask)
    pixel_data1.append(dataset1.pixel_array)
print(len(masks))
print(pixel_data1)
['IMG-0004-00040.dcm', 'IMG-0002-00018.dcm', 'IMG-0046-00034.dcm', 'IMG-0043-00014.dcm', 'IMG-0064-00016.dcm',....]
['IMG-0004-00040.png', 'IMG-0002-00018.png', 'IMG-0046-00034.png', 'IMG-0043-00014.png', 'IMG-0064-00016.png',....]
0 ----------------> outputs of trouble area <--------------
[]
import glob
img_path = glob.glob("/content/drive/My Drive/Images/IMG*.dcm")
mask_path = glob.glob("/content/drive/My Drive/Masks/IMG*.png")
print(len(img_path))
print(len(mask_path))
40
40
images = []
a = []
for a in pixel_data:
    a = resize(a, (a.shape[0], 256, 256))
    a = a[:, :, :]
    for j in range(a.shape[0]):
        images.append(a[j, :, :])
No output, this section works fine.
images=np.asarray(images)
print(len(images))
10880
masks = []  # -------------------> the other trouble area <-------
b = []
for b in masks:
    b = resize(b, (b.shape[0], 256, 256))
    b = b[:, :, :]
    for j in range(b.shape[0]):
        masks.append(b[j, :, :])
No output, trying to solve the problem of how to fix this section.
masks = np.asarray(masks)  # ------------> fix the above section and this
print(len(masks))  # should have no issues
[]
You are trying to load the DICOM files again using nib.load, which does not work, as you already found out:
for name in img_path:
    a = nib.load(name)  # does not work with DICOM files
    a = a.get_data()
    a = resize(a, (a.shape[0], 256, 256))
You already have the data from the DICOM files in the pixel_data list, so you should use these:
for a in pixel_data:
    a = resize(a, (a.shape[0], 256, 256))  # or something similar, depending on the shape of pixel_data
    ...
Your last loop, for mask in masks:, is never executed, because two lines above it you set masks = [].
It looks like it should be for mask in mask_path: instead; mask_path is the list of mask file names.
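Putting both fixes together, a minimal sketch of the corrected mask loop might look like this (assuming the PNG masks are single 2D images rather than volumes, and the same skimage resize as in the question):

import imageio
import numpy as np
from skimage.transform import resize

pixel_data1 = []
for name in mask_path:            # iterate over the file names, not the empty masks list
    b = imageio.imread(name)      # imageio returns the array directly; there is no .pixel_array
    b = resize(b, (256, 256))
    pixel_data1.append(b)
masks = np.asarray(pixel_data1)
print(len(masks))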
I have the following code in a Jupyter notebook:
import numpy as np
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
N = 4000
x = np.random.random(size=N) * 100
y = np.random.random(size=N) * 100
radii = np.random.random(size=N) * 1.5
colors = ["#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(np.floor(50 + 2 * x), np.floor(30 + 2 * y))]
output_notebook()
# Loading BokehJS ...
p = figure()
p.circle(x, y, radius=radii, fill_color=colors, fill_alpha=0.6, line_color=None)
show(p)
However, it does not show any plot or graphics; it is simply stuck on "Loading BokehJS ...".
In principle, this should work on nbviewer, since it is GitHub (I think?) that strips rendered notebooks of all JavaScript. In my experience, however, it doesn't.
GitHub scrubs all JavaScript from all Jupyter notebooks before rendering them (presumably for security reasons). Bokeh requires JavaScript code from the client library BokehJS in order to render or do anything at all. Given this, I would not expect Bokeh plots in Jupyter notebooks to ever work on GitHub, unfortunately.
I would very much like for it to be workable, but it is entirely outside our control. I have reached out to GitHub asking for an option to disable rendering entirely for notebooks in a repo, on the reasoning that "not rendering at all" is preferable to "rendering but looking broken" but have not heard back from them.
Note that nbviewer does not strip JavaScript, which is why all the notebooks at the Bokeh nbviewer.org gallery show up just fine.
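Unrelated to GitHub: if a live notebook hangs on "Loading BokehJS ..." (for example when the CDN is unreachable), one thing worth trying is embedding BokehJS inline instead, a minimal sketch:

from bokeh.io import output_notebook
from bokeh.resources import INLINE

# Embed BokehJS in the notebook output instead of fetching it from the CDN.
output_notebook(resources=INLINE)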
I am using the VL_SLIC function in MATLAB and I am following the tutorial for the function here: http://www.vlfeat.org/overview/slic.html
This is the code I have written so far:
im = imread('slic_image.jpg');
regionSize = 10 ;
regularizer = 10;
vl_setup;
segments = vl_slic(single(im), regionSize, regularizer);
imshow(segments);
I just get a black image and I am not able to see the segmented image with the superpixels. Is there a way that I can view the result as shown in the webpage?
The reason why is because segments is actually a map that tells you which regions of your image are superpixels. If a pixel in this map belongs to ID k, this means that the pixel belongs to superpixel k. Also, the map is of type uint32, so when you try doing imshow(segments); it doesn't show anything meaningful. For the image seen on the website, there are 1023 segments given your selected parameters, so the map spans from 0 to 1023. If you want to see what the segments look like, you could do imshow(segments,[]);. What this will do is map the region with the ID of 1023 to white, while the pixels that don't belong to any superpixel region (ID of 0) get mapped to black. You would actually get something like this:
Not very meaningful! Now, to get what you see on the webpage, you're going to have to do a bit more work. From what I know, VLFeat doesn't have built-in functionality that shows you the results like what is seen on their webpage. As such, you will have to write code to do it yourself. You can do this by following these steps:
1. Create a map, initialized to all true, that is the same size as the image.
2. For each superpixel region k:
   - Create another map that marks true for any pixel belonging to region k, and false otherwise.
   - Find the perimeter of this region.
   - Set these perimeter pixels to false in the map created in Step #1.
3. Repeat Step #2 until we have finished going through all of the regions.
4. Use this map to mask out all of the pixels in the original image to get what you see on the website.
Let's go through that code now. Below is the setup that you have established:
vl_setup;
im = imread('slic_image.jpg');
regionSize = 10 ;
regularizer = 10 ;
segments = vl_slic(single(im), regionSize, regularizer);
Now let's go through that algorithm that I just mentioned:
perim = true(size(im,1), size(im,2));
for k = 1 : max(segments(:))
    regionK = segments == k;
    perimK = bwperim(regionK, 8);
    perim(perimK) = false;
end
perim = uint8(cat(3,perim,perim,perim));
finalImage = im .* perim;
imshow(finalImage);
We thus get:
Bear in mind that this is not exactly the same as what you get on the website. I simply went to the website, saved that image, and then proceeded with the code I just showed you. This is probably because the slic_image.jpg image is not the exact original that was given in their example; there seem to be superpixels in areas with some bad quantization artifacts. Also, I'm using a relatively old version of VLFeat, version 0.9.16, and there may have been improvements to the algorithm since then. In any case, this is something you can start with.
Hope this helps!
I found these lines in vl_demo_slic.m may be useful.
segments = vl_slic(im, regionSize, regularizer, 'verbose') ;
% overlay segmentation
[sx,sy]=vl_grad(double(segments), 'type', 'forward') ;
s = find(sx | sy) ;
imp = im ;
imp([s s+numel(im(:,:,1)) s+2*numel(im(:,:,1))]) = 0 ;
It generates edges from the gradient of the superpixel map (segments).
While I want to take nothing away from rayryeng's beautiful answer, this could also help.
http://www.vlfeat.org/matlab/demo/vl_demo_slic.html
Available in: toolbox/demo
I am generating a random geometric graph using NetworkX and exporting all the node and edge information to a file.
I want to regenerate the same graph by importing the node and edge information from that file.
Code to export the node values and edge information:
import networkx as nx
import matplotlib.pyplot as plt

G = nx.random_geometric_graph(10, 0.5)
filename = "ipRandomGrid.txt"
fh = open(filename, 'wb')
nx.write_adjlist(G, fh)
nx.draw(G)
plt.show()
I am trying to import it with the code below and change the color of some nodes, but it generates a different graph.
filename = "ipRandomGrid.txt"
fh=open(filename, 'rb')
G=nx.Graph()
G=nx.read_adjlist("ipRandomGrid.txt")
pos=nx.random_layout(G)
nx.draw_networkx_nodes(G,pos,nodelist=['1','2'],node_color='b')
nx.draw(G)
plt.show()
How can I generate the same graph with the color of a few nodes changed?
If I understand the problem you're having correctly, the trouble is here:
pos=nx.random_layout(G)
nx.draw_networkx_nodes(G,pos,nodelist=['1','2'],node_color='b')
nx.draw(G)
You create a random layout of the graph in the first line, and use it to draw nodes '1' and '2' in the second line. You then draw the graph again in the third line without specifying the positions, which uses a spring model to position the nodes.
Your graph has no extra nodes; you've just drawn two of them in two different positions. If you want to consistently draw a graph the same way, you need to consistently use the pos you calculated. If you want it to be the same after storing and reloading, then save pos as well.
The easiest way to store node position data in your case might be Python pickles. NetworkX has a write_gpickle() function that will do this for you. Note that the positions are already available as node attributes when you generate a random geometric graph, so you probably want to use those when drawing. Here is an example of how to generate, save, load, and draw the same graph.
In [1]: import networkx as nx
In [2]: G=nx.random_geometric_graph(10,0.5)
In [3]: pos = nx.get_node_attributes(G,'pos')
In [4]: nx.draw(G,pos)
In [5]: nx.write_gpickle(G,'rgg.gpl')
In [6]: H=nx.read_gpickle('rgg.gpl')
In [7]: pos = nx.get_node_attributes(H,'pos')
In [8]: nx.draw(H,pos)
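To also recolor a few nodes, as the question asks, keep passing the same pos to every draw call; a sketch continuing the session (note that read_gpickle preserves the integer node ids, whereas read_adjlist turns them into strings):
In [9]: nx.draw(H, pos)
In [10]: nx.draw_networkx_nodes(H, pos, nodelist=[1, 2], node_color='b')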