NetworkXError: Node 8 has no position - ipython

I just started programming and I'm currently working in an IPython notebook with networkx. The code below works perfectly if you run it, but if you uncomment #G.add_edge(2, 8, egdes=6) it gives you the error NetworkXError: Node 8 has no position. Why does it only work up to the sixth node?
import networkx as nx
import matplotlib.pyplot as plt
import pylab
%matplotlib inline
G = nx.Graph()
G.add_edge(1, 2, egdes=1)
G.add_edge(1, 3, egdes=2)
G.add_edge(1, 4, egdes=3)
G.add_edge(1, 5, egdes=4)
G.add_edge(1, 6, egdes=5)
pos=nx.spring_layout(G)
#G.add_edge(2, 8, egdes=6)
nx.draw(G,pos)
edge_labels=dict([((fe,se,),e['egdes'])
                  for fe,se,e in G.edges(data=True)])
nx.draw_networkx_edge_labels(G,pos,edge_labels)
pylab.show()
I hope one of you guys can help me, thanks in advance!

You need to create the node positions
pos=nx.spring_layout(G)
after you have built the graph (added all edges and nodes) and before you draw it.
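Applied to the snippet above, a minimal sketch of the working order (same code, with spring_layout called only after the last edge, including the formerly commented one, has been added):
import networkx as nx
import matplotlib.pyplot as plt
import pylab
%matplotlib inline

G = nx.Graph()
G.add_edge(1, 2, egdes=1)
G.add_edge(1, 3, egdes=2)
G.add_edge(1, 4, egdes=3)
G.add_edge(1, 5, egdes=4)
G.add_edge(1, 6, egdes=5)
G.add_edge(2, 8, egdes=6)

# Every node now exists, so spring_layout assigns a position to node 8 as well.
pos = nx.spring_layout(G)

nx.draw(G, pos)
edge_labels = dict([((fe, se), e['egdes'])
                    for fe, se, e in G.edges(data=True)])
nx.draw_networkx_edge_labels(G, pos, edge_labels)
pylab.show()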

Related

TF Keras code adaptation from python2.7 to python3

I am working to adapt Python 2.7 code that uses Keras and TensorFlow to implement a CNN, but it looks like the Keras API has changed a bit since the original code was written. I keep getting an error about "Negative dimension after subtraction" and I cannot figure out what is causing it.
Unfortunately I am not able to provide an executable piece of code because I was not able to make the original code work, but the repository containing all the source files can be found here.
The piece of code:
from keras.callbacks import EarlyStopping
from keras.layers.containers import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Reshape, Flatten, Dropout, Dense
from keras.layers.embeddings import Embedding
from keras.models import Graph
from keras.preprocessing import sequence

filter_lengths = [3, 4, 5]
self.model = Graph()

'''Embedding Layer'''
self.model.add_input(name='input', input_shape=(max_len,), dtype=int)
self.model.add_node(Embedding(
    max_features, emb_dim, input_length=max_len), name='sentence_embeddings', input='input')

'''Convolution Layer & Max Pooling Layer'''
for i in filter_lengths:
    model_internal = Sequential()
    model_internal.add(
        Reshape(dims=(1, self.max_len, emb_dim), input_shape=(self.max_len, emb_dim))
    )
    model_internal.add(Convolution2D(
        nb_filters, i, emb_dim, activation="relu"))
    model_internal.add(
        MaxPooling2D(pool_size=(self.max_len - i + 1, 1))
    )
    model_internal.add(Flatten())
    self.model.add_node(model_internal, name='unit_' + str(i), input='sentence_embeddings')
What I have tried:
m = tf.keras.Sequential()
m.add(tf.keras.Input(shape=(max_len, ), name="input"))
m.add(tf.keras.layers.Embedding(max_features, emb_dim, input_length=max_len))
filter_lengths = [ 3, 4, 5 ]
for i in filter_lengths:
    model_internal = tf.keras.Sequential(name=f'unit_{i}')
    model_internal.add(
        tf.keras.layers.Reshape(( 1, max_len, emb_dim ), input_shape=( max_len, emb_dim ))
    )
    model_internal.add(
        tf.keras.layers.Convolution2D(100, i, emb_dim, activation="relu")
    )
    model_internal.add(
        tf.keras.layers.MaxPooling2D(pool_size=( max_len - i + 1, 1 ))
    )
    model_internal.add(
        tf.keras.layers.Flatten()
    )
    m.add(model_internal)
I do not expect a complete solution; what I am really trying to understand is the cause of the following error:
Negative dimension size caused by subtracting 3 from 1 for '{{node conv2d_5/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 200, 200, 1], use_cudnn_on_gpu=true](Placeholder, conv2d_5/Conv2D/ReadVariableOp)' with input shapes: [?,1,300,200], [3,3,200,100].
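The error text itself points at two API shifts. In tf.keras, Conv2D takes (filters, kernel_size, strides) positionally, so Convolution2D(100, i, emb_dim) sends emb_dim into strides (hence strides=[1, 200, 200, 1] in the message), whereas the old Keras 1.x Convolution2D took (nb_filter, nb_row, nb_col). And the default data format is now channels_last, so the (1, max_len, emb_dim) reshape leaves a height of 1 for a kernel of height i, which is where "subtracting 3 from 1" comes from. A hedged sketch of one convolution branch adapted under those assumptions (placeholder sizes, not the full multi-branch model from the repository):
import tensorflow as tf

max_len, emb_dim, nb_filters = 300, 200, 100   # placeholder values, not taken from the repo
i = 3                                          # one of the filter lengths

branch = tf.keras.Sequential(name=f"unit_{i}")
# channels_last layout: (height=max_len, width=emb_dim, channels=1)
branch.add(tf.keras.layers.Reshape((max_len, emb_dim, 1), input_shape=(max_len, emb_dim)))
# kernel_size is a single (height, width) tuple; a bare third argument is read as strides
branch.add(tf.keras.layers.Conv2D(nb_filters, (i, emb_dim), activation="relu"))
branch.add(tf.keras.layers.MaxPooling2D(pool_size=(max_len - i + 1, 1)))
branch.add(tf.keras.layers.Flatten())
branch.summary()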

t-test using the scipy library in Python

Which of the following imports is used to run a Student's t-test with the scipy library in Python?
a = [15, 12, 7, 98]
b = [2, 20, 8, 28]
stat, p = ttest_ind(a, b)
print(stat,p)
Options:
from scipy.ttest_ind import ttest_ind
from student.ttest_ind import ttest_ind
from scipy.statistics import ttest_ind
from scipy.ttest_ind import ttest_ind
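For reference, ttest_ind is provided by scipy.stats (none of the listed modules exist in SciPy), so a runnable version of the snippet looks like this:
from scipy.stats import ttest_ind

a = [15, 12, 7, 98]
b = [2, 20, 8, 28]
stat, p = ttest_ind(a, b)   # two-sample (independent) Student's t-test
print(stat, p)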

networkx: remove edge with specific attribute from multigraph

I'd like to remove a specific edge (with a specific color) from a MultiGraph.
How can I do that?
The following code does not work.
#!/usr/bin/env python
import matplotlib.pyplot as plt
import networkx as nx
G = nx.MultiGraph()
# the_colored_graph.add_edge(v1, v2, "red")
G.add_edge(1, 2, color="red")
G.add_edge(2, 3, color="red")
G.add_edge(4, 2, color="green")
G.add_edge(2, 4, color="blue")
print (G.edges(data=True))
# G.remove_edge(2, 4, color="green")
#
selected_edge = [(u,v) for u,v,e in G.edges(data=True) if e['color'] == 'green']
print (selected_edge)
G.remove_edge(selected_edge[0][0], selected_edge[0][1])
print (G.edges(data=True))
nx.draw(G)
plt.show()
When constructing the multigraph, give each edge a key (the key can be anything that disambiguates the parallel edges - say, the color):
G.add_edge(1, 2, color="red", key='red')
Remove an edge by specifying the end nodes and the key:
G.remove_edge(1, 2, key='red')
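A minimal sketch of that approach on the graph above, using the color both as the key and as a data attribute so the edge can still be found by attribute (keys=True returns the key that remove_edge needs):
import networkx as nx

G = nx.MultiGraph()
G.add_edge(1, 2, key="red", color="red")
G.add_edge(2, 3, key="red", color="red")
G.add_edge(4, 2, key="green", color="green")
G.add_edge(2, 4, key="blue", color="blue")

# Select the parallel edge by its attribute, keeping the key that identifies it.
selected = [(u, v, k) for u, v, k, d in G.edges(keys=True, data=True)
            if d["color"] == "green"]
u, v, k = selected[0]
G.remove_edge(u, v, key=k)

print(G.edges(keys=True, data=True))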

Resize screenshot with mss for better reading with pytesseract

I need to resize a screenshot taken by mss in order to get a better reading from pytesseract. I got it working with PIL + pyscreenshot, but I can't get it to work with mss.
from numpy import array, flip
from mss import mss
from pytesseract import image_to_string
from time import sleep
def screenshot():
    cap = array(mss().grab({'top': 171, 'left': 1088, 'width': 40, 'height': 17}))
    cap = flip(cap[:, :, :3], 2)
    return cap

def read(param):
    tesseract = image_to_string(param)
    return tesseract

while True:
    print(read(screenshot()))
    sleep(2)
Here it is working with pyscreenshot:
from time import sleep
from PIL import Image, ImageOps
import pyscreenshot as ImageGrab
import pytesseract
while 1:
    test = ImageGrab.grab(bbox=(1088,171,1126,187))
    testt = ImageOps.fit(test, (50, 28), method=Image.ANTIALIAS)
    testt.save('result.png')
    read = pytesseract.image_to_string(testt)
    print(read)
    sleep(2)
And I don't care about maintaining the aspect ratio; it works better that way with pytesseract.
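One way to do the resize with mss (not from the original post; a sketch assuming Pillow is available) is to convert the grab to a PIL image and scale it before OCR, mirroring the (50, 28) target of the pyscreenshot version:
from time import sleep
from mss import mss
from PIL import Image
from pytesseract import image_to_string

def screenshot():
    with mss() as sct:
        cap = sct.grab({'top': 171, 'left': 1088, 'width': 40, 'height': 17})
    # mss exposes the raw BGRA buffer; Pillow can decode it directly.
    img = Image.frombytes("RGB", cap.size, cap.bgra, "raw", "BGRX")
    # Upscale without preserving the aspect ratio, as in the pyscreenshot version.
    return img.resize((50, 28), Image.LANCZOS)

while True:
    print(image_to_string(screenshot()))
    sleep(2)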

How do you make NLTK draw() trees that are inline in iPython/Jupyter

For Matplotlib plots in IPython/Jupyter you can make the notebook render plots inline with
%matplotlib inline
How can one do the same for NLTK draw() for trees? Here is the documentation http://www.nltk.org/api/nltk.draw.html
Based on this answer:
import os
from IPython.display import Image, display
from nltk.draw import TreeWidget
from nltk.draw.util import CanvasFrame
def jupyter_draw_nltk_tree(tree):
    cf = CanvasFrame()
    tc = TreeWidget(cf.canvas(), tree)
    tc['node_font'] = 'arial 13 bold'
    tc['leaf_font'] = 'arial 14'
    tc['node_color'] = '#005990'
    tc['leaf_color'] = '#3F8F57'
    tc['line_color'] = '#175252'
    cf.add_widget(tc, 10, 10)
    cf.print_to_file('tmp_tree_output.ps')
    cf.destroy()
    os.system('convert tmp_tree_output.ps tmp_tree_output.png')
    display(Image(filename='tmp_tree_output.png'))
    os.system('rm tmp_tree_output.ps tmp_tree_output.png')
A little slow, but it does the job. If you're doing it remotely, don't forget to run your ssh session with the -X flag (like ssh -X user@server.com) so that Tk can initialize itself (otherwise you get the "no display name and no $DISPLAY environment variable" kind of error).
UPD: it seems that recent versions of jupyter and nltk work nicely together, so you can just do IPython.core.display.display(tree) to get a nice-looking tree render embedded in the output.
2019 Update:
This runs on Jupyter Notebook:
from nltk.tree import Tree
from IPython.display import display
tree = Tree.fromstring('(S (NP this tree) (VP (V is) (AdjP pretty)))')
display(tree)
Requirements:
NLTK
Ghostscript