How to serve a pyvis graph with FastAPI?

I wonder what the best way is to serve a pyvis network with FastAPI?
This interactive graph was generated with pyvis, stored in an HTML file, and then displayed as an interactive tool in JupyterLab. I would like to serve it as part of a molecular graph exploration tool.
import networkx as nx
import obonet
G = obonet.read_obo('https://ftp.ebi.ac.uk/pub/databases/chebi/ontology/chebi_core.obo')
# Create smaller subgraph
H = nx.ego_graph(G, 'CHEBI:17710', 2)
H.nodes['CHEBI:24669']
from pyvis.network import Network
nt = Network('1500px', '1500px', notebook=True)
nt.from_nx(H)
nt.show('H.html')

Related

How to save models in pytorch from tpu to cpu

I am training a neural network model with PyTorch.
Since this model is very complicated, I made use of the pytorch_xla package to use a TPU. I finished training the model and now I want to save the weights so I will be able to use them from any environment.
I tried to save the data like so
file_name = "model_params"
torch.save(model.state_dict(), file_name)
and when I tried to load them (from an environment which does not support TPU)
model.load_state_dict(torch.load(file_name))
I got the following error
NotImplementedError: Could not run 'aten::empty_strided' with arguments from the 'XLA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
Is there a way to do what I want?
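A common workaround (a sketch, not from this thread: a small nn.Linear stands in for the trained model, which on a TPU would hold XLA tensors) is to move the parameters to CPU before serializing, so the file can be loaded anywhere; pytorch_xla's xm.save helper does essentially this:

```python
import torch
import torch.nn as nn

# Stand-in for the trained model from the question; on a TPU the
# parameters would live on an XLA device rather than the CPU.
model = nn.Linear(4, 2)

file_name = "model_params"
# .cpu() is a no-op for CPU tensors but moves XLA/GPU tensors off-device,
# so the saved file no longer references the XLA backend.
cpu_state_dict = {k: v.cpu() for k, v in model.state_dict().items()}
torch.save(cpu_state_dict, file_name)

# Later, in a CPU-only environment:
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(file_name, map_location="cpu"))
```

Note that map_location alone cannot rescue a file that already contains XLA tensors, which is why the move to CPU has to happen before torch.save.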

How does one read multiple DICOM and PNG files at once using pydicom.read_file() and cv2.imread()?

I am currently working on a fully convolutional network for renal segmentation in MR images. I have 40 images and their ground-truth labels, and I am attempting to load all of the images for pre-processing purposes.
Using Google Colab, with the latest versions of pydicom and pip installed, for this project. Currently have the Google Drive mounted to the Colab program and the code below shows the correct pathways to the images and their masks in the pydicom.read_file() and cv2.imread() calls, respectively.
However, when I use the "/../IMG*.dcm" or "/../IMG*.png" file paths (which should be legal?), I receive a "FileNotFoundError" as listed below. But, when I specify a specific .dcm or .png image, the pydicom.read_file() and cv2.imread() calls function quite normally.
Any suggestions on how to resolve this issue? I am struggling a lot with loading the data and pre-processing but have the model architecture ready to go once these preliminary hurdles are overcome.
import pydicom
import numpy as np
import cv2

images = pydicom.read_file("/content/drive/My Drive/CHOAS_Kidney_Labels/Training_Images/T1DUAL/IMG*.dcm")
numpyArray = images.pixel_array
masks = cv2.imread("/content/drive/My Drive/CHOAS_Kidney_Labels/Ground_Truth_Training/T1DUAL/IMG*.png")
-----> FileNotFoundError: [Errno 2] No such file or directory: '/content/drive/My Drive/CHOAS_Kidney_Labels/Training_Images/T1DUAL/IMG*.dcm'
pydicom.read_file does not support wildcards. You have to iterate over the files yourself, something like (untested):
import glob
import pydicom

pixel_data = []
paths = glob.glob("/content/drive/My Drive/CHOAS_Kidney_Labels/Training_Images/T1DUAL/IMG*.dcm")
for path in paths:
    dataset = pydicom.dcmread(path)
    pixel_data.append(dataset.pixel_array)

Horovod Timeline and MPI Tracing in Azure Machine Learning Workspace (MPI Configuration)

All,
I am trying to train a distributed model using Horovod on Azure Machine Learning Service as shown below.
estimator = TensorFlow(source_directory=script_folder,
                       entry_script='train_script.py',
                       script_params=script_params,
                       compute_target=compute_target_gpu_4,
                       conda_packages=['scikit-learn'],
                       node_count=2,
                       distributed_training=MpiConfiguration(),
                       framework_version='1.13',
                       use_gpu=True)
run = exp.submit(estimator)
How do I enable the Horovod timeline?
How do I enable more detailed MPI tracing to see the communication between the nodes?
Thanks.
The following sample uses the TensorFlow Estimator class in the SDK, with distributed_training set to MpiConfiguration(), and Horovod to train a gensim sentence-similarity model:
https://github.com/microsoft/nlp-recipes/blob/46c0658b79208763e97ae3171e9728560fe37171/examples/sentence_similarity/gensen_train.py
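On the timeline question specifically: Horovod enables its timeline through the HOROVOD_TIMELINE environment variable, which names the Chrome-trace JSON file to write. A sketch (placing this in train_script.py before Horovod is initialised is an assumption about your script layout; ./outputs is used so Azure ML uploads the file with the run):

```python
import os

# Horovod's coordinator (rank 0) writes a Chrome-trace JSON to this path.
os.environ["HOROVOD_TIMELINE"] = "./outputs/horovod_timeline.json"
# Optional: also mark tensor-negotiation cycles in the trace.
os.environ["HOROVOD_TIMELINE_MARK_CYCLES"] = "1"
```

The resulting JSON can be opened in chrome://tracing to inspect per-tensor negotiate/allreduce phases, which is usually the most direct view of inter-node communication.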

Where are the high-level charts in Bokeh's latest version (1.0.3)?

I have a quick question:
I see that you have the high-level charts in version 0.11.0:
http://docs.bokeh.org/en/0.11.0/docs/user_guide/charts.html
But I can't find the same topic in the latest version (1.0.3). Did the Bokeh team remove it?
I have been working on adapting histograms to my project but can't find the histograms section in the latest version.
Any ideas?
The bokeh.charts API was deprecated and scheduled to be removed in version 1.0 several years ago. However, even that schedule was eventually accelerated, due to lack of interest and better alternatives. It has been completely gone from the project since late 2017.
For very high-level APIs on top of Bokeh, consider Chartify or HoloViews. Or just create histograms directly with the stable bokeh.plotting API, e.g.
from bokeh.plotting import figure, show
from bokeh.sampledata.autompg import autompg as df
from numpy import histogram

p = figure(plot_height=300)
hist, edges = histogram(df.hp, density=True, bins=20)
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="white")
show(p)

How can I create a directed tree graph in Gephi instead of a spherical one?

I want to make a network graph which shows the distribution of our documents in our folder structure.
I have the node file, edge file and Gephi graph file in this location:
https://1drv.ms/f/s!AuVfRBdVHkO7hgs5K9r9f7jBBAUH
What I do is:
Run the ForceAtlas2 algorithm with scaling 10-20, "dissuade hubs" and "prevent overlap" checked, and all other settings at their defaults.
What I get is a graph with groups distributed radially/spherically. However, what I want is a directed tree network graph.
Does anyone know how I can adjust Gephi to produce this?
Thanks!
I just found a solution.
I tested the file format as shown on the yEd site's "import excel file" page:
http://yed.yworks.com/support/manual/import_excel.html
This gave me the yEd import dialog (it took a long time to figure out that it's a pop-up menu and not selectable through the standard menu).
Anyway, it worked, and I adjusted the test files with the data I had prepared for Gephi. This was pretty easy; I could use the source/target IDs etc. Just copy and paste.
I loaded it into yEd and used some directed and radial clustering algorithms on it. It works fine!
Below you can find the Excel node/edge file used to import into yEd and the graph file you can open with yEd to see the final radial result.
https://1drv.ms/f/s!AuVfRBdVHkO7hg6DExK_eVkm5_mR
The only thing left to figure out is how to combine the weight (which represents the number of documents) with the node size.
Unfortunately, as of version 0.9.0, Gephi no longer supports hierarchical graphs. Maybe try using a previous version?
Other alternatives involve more complex software, such as Graphviz, but you need a .dot file instead of your .csv. I looked all over but could not find an easy-to-use CSV-to-DOT converter.
You could try looking at d3-hierarchy, a node.js library, but then again you need to use the not-so-user-friendly npm. If you look at the link, it seems it can produce the kind of diagram you're looking for.
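That said, a minimal CSV-to-DOT conversion can be scripted by hand. A sketch (the function name is mine, and the Source/Target column names are an assumption based on Gephi's usual edge-list export; adjust them to match your file):

```python
import csv

def edges_csv_to_dot(csv_path, dot_path):
    """Convert an edge-list CSV (columns: Source, Target) to a Graphviz .dot file."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    with open(dot_path, "w") as f:
        f.write("digraph folders {\n")
        for row in rows:
            # Quote node IDs so paths with spaces or dots stay valid DOT.
            f.write(f'  "{row["Source"]}" -> "{row["Target"]}";\n')
        f.write("}\n")
```

The resulting file can then be laid out as a tree with Graphviz, e.g. `dot -Tpng folders.dot -o folders.png`.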