I've written some code to display a bipartite graph using networkx in a Jupyter notebook. I've rendered the node labels using what I assume is TeX. However, I cannot seem to increase the size of the rendered TeX font. Is there a simple way of doing this? Help would be great.
Below is my code:
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
from networkx.algorithms import bipartite

B = nx.Graph()
B.add_nodes_from(['$x_1$', '$x_2$', '$x_3$'], s='o', c='#AA5555', bipartite=0)  # add the node attribute 'bipartite'
B.add_nodes_from(['$f_a$', '$f_b$', '$f_c$', '$f_d$'], s='s', c='#55AAAA', bipartite=1)
B.add_edges_from([('$x_1$', '$f_a$'), ('$x_1$', '$f_b$'), ('$x_2$', '$f_a$'), ('$x_2$', '$f_b$'), ('$x_2$', '$f_c$'), ('$x_3$', '$f_c$'), ('$x_3$', '$f_d$')])

pos = dict()
X, Y = bipartite.sets(B)
pos.update((n, (i, 1)) for i, n in enumerate(X))
pos.update((n, (i + 0.5, 2)) for i, n in enumerate(Y))

disjointSetCount = 2
for disjointSet in range(0, disjointSetCount):
    nx.draw(
        B,
        pos,
        with_labels=True,
        node_shape='s' if disjointSet == 1 else 'o',
        node_color='#FFEEEE' if disjointSet == 1 else '#EEEEFF',
        node_size=1000,
        nodelist=[
            sNode[0] for sNode in filter(lambda x: x[1]["bipartite"] == disjointSet, B.nodes(data=True))
        ]
    )
plt.savefig("img/15_Graphical_Models_12b.png")  # save as png
One of the optional arguments for nx.draw is font_size. If I set font_size=100, the rendered labels come out plenty big.
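As a minimal sketch, this is the draw call from the loop above with font_size added (the value 30 is just illustrative; B, pos, disjointSetCount and disjointSet come from your code):
for disjointSet in range(0, disjointSetCount):
    nx.draw(
        B,
        pos,
        with_labels=True,
        font_size=30,  # size of the rendered (TeX) node labels
        node_shape='s' if disjointSet == 1 else 'o',
        node_color='#FFEEEE' if disjointSet == 1 else '#EEEEFF',
        node_size=1000,
        nodelist=[
            sNode[0] for sNode in filter(lambda x: x[1]["bipartite"] == disjointSet, B.nodes(data=True))
        ]
    )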
I am using the following code to run a uniform filter on my data:
import numpy as np
from scipy.ndimage.filters import uniform_filter

a = np.arange(1000)
b = uniform_filter(a, size=10)
The filter right now seems to work as if a stride were set to size // 2.
How can I adjust the code so that the stride of the filter is not half of the size?
You seem to be misunderstanding what uniform_filter is doing.
It creates an array b in which every a[i] is replaced by the mean of a block of size 10 centered at a[i]. So, something like:
for i in range(0, len(a)):  # for the 1D case, ignoring the boundary handling
    b[i] = np.mean(a[i - 10//2 : i + 10//2])
Note that near the edges this would access indices outside the valid range 0..999. By default, uniform_filter assumes that the data before position 0 is a mirror reflection of the data that follows it, and similarly at the end (mode='reflect').
Also note that b uses the same dtype as a. In the example, where a is of integer type, the mean is also calculated in integer arithmetic, which can cause some loss of precision.
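If you want to avoid that integer truncation, one simple option is to filter a floating-point copy of the data:
b = uniform_filter(a.astype(float), size=10)   # compute the means in floating point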
Here is some code and a plot to illustrate what's happening:
import matplotlib.pyplot as plt
import numpy as np
from scipy.ndimage.filters import uniform_filter

fig, axes = plt.subplots(ncols=2, figsize=(15, 4))
for ax in axes:
    if ax == axes[1]:
        a = np.random.uniform(-1, 1, 50).cumsum()
        ax.set_title('random curve')
    else:
        a = np.arange(50, dtype=float)
        ax.set_title('values from 0 to 49')
    b = uniform_filter(a, size=10)
    ax.plot(a, 'b-')
    ax.plot(-np.arange(0, 10) - 1, a[:10], 'b:')      # show the reflection at the start
    ax.plot(50 + np.arange(0, 10), a[:-11:-1], 'b:')  # show the reflection at the end
    ax.plot(b, 'r-')
plt.show()
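Back to the stride question: uniform_filter has no stride parameter; it always returns an array of the same size as its input, so nothing is being skipped. If what you actually want is a decimated (strided) result, a simple option is to slice the filtered output:
b_strided = uniform_filter(a, size=10)[::10]   # keep every 10th filtered value ("stride" of 10; adjust the step as needed)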
I have a set of about 33K (x,y,z) points in a csv file and would like to convert this to a grid of density values using scipy.stats.gaussian_kde. I have not been able to find a way to convert this point cloud array into an appropriate input format for the gaussian_kde function (and then take the output of this and convert it into a density value grid). Can anyone provide sample code?
Here's an example with some comments which may be of use. gaussian_kde wants the data and points to be row-stacked, i.e. (# ndim, # num values), as per the docs. In your case you would row_stack([x, y, z]) so that the shape is (3, 33000).
from scipy.stats import gaussian_kde
import numpy as np
import matplotlib.pyplot as plt
# simulate some data
n = 33000
x = np.random.randn(n)
y = np.random.randn(n) * 2
# data must be stacked as (# ndim, # n values) as per docs.
data = np.row_stack((x, y))
# perform KDE
kernel = gaussian_kde(data)
# create grid over which to evaluate KDE
s = np.linspace(-8, 8, 128)
grid = np.meshgrid(s, s)
# again KDE needs points to be row_stacked
grid_points = np.row_stack([g.ravel() for g in grid])
# evaluate KDE and reshape result correctly
Z = kernel(grid_points)
Z = Z.reshape(grid[0].shape)
# plot KDE as image and overlay some data points
fig, ax = plt.subplots()
ax.matshow(Z, extent=(s.min(), s.max(), s.min(), s.max()))
ax.plot(x[::10], y[::10], 'w.', ms=1, alpha=0.3)
ax.set_xlim(s.min(), s.max())
ax.set_ylim(s.min(), s.max())
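For your actual 3-D (x, y, z) point cloud the same pattern applies, just with three row-stacked coordinates and a 3-D evaluation grid. A minimal sketch, assuming x, y and z are the columns loaded from your CSV (the grid resolution is kept small because evaluating a KDE on a dense 3-D grid gets expensive):
data = np.row_stack((x, y, z))                          # shape (3, 33000)
kernel = gaussian_kde(data)
s = np.linspace(data.min(), data.max(), 32)             # one shared axis for the grid
grid = np.meshgrid(s, s, s)
grid_points = np.row_stack([g.ravel() for g in grid])   # shape (3, 32**3)
density = kernel(grid_points).reshape(grid[0].shape)    # density values on the 32x32x32 grid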
I would like to perform some multivariate regression using Gaussian process regression as implemented in GPflow, version 2.
Installed with pip install gpflow==2.0.0rc1
Below is some example code that generates some 2D data, then attempts to fit it using GPR, and finally computes the difference
between the true input data and the GPR prediction.
Eventually I would like to extend this to higher dimensions,
do tests against a validation set to check for over-fitting,
and experiment with other kernels and "Automatic Relevance Determination",
but understanding how to get this to work is the first step.
Thanks!
The following code snippet will run in a Jupyter notebook.
import gpflow
import numpy as np
import matplotlib
from gpflow.utilities import print_summary
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (12, 6)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def gen_data(X, Y):
    """
    Make some fake data.
    X, Y are np.ndarrays with shape (N,) where
    N is the number of samples.
    """
    ys = []
    for x0, x1 in zip(X, Y):
        y = x0 * np.sin(x0 * 10)
        y = x1 * np.sin(x0 * 10)
        y += 1
        ys.append(y)
    return np.array(ys)

# generate some fake data
x = np.linspace(0, 1, 20)
X, Y = np.meshgrid(x, x)
X = X.ravel()
Y = Y.ravel()
z = gen_data(X, Y)
# note X.shape, Y.shape and z.shape
# are all (400,) for this case.

# if you would like to plot the data you can do the following
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(X, Y, z, s=100, c='k')

# had to set this
# to avoid the following error
# tensorflow.python.framework.errors_impl.InvalidArgumentError: Cholesky decomposition was not successful. The input might not be valid. [Op:Cholesky]
gpflow.config.set_default_positive_minimum(1e-7)

# set up the kernel
k = gpflow.kernels.Matern52()

# set up the GPR model
# I think the shape of the independent data
# should be (400, 2) for this case
XY = np.column_stack([[X, Y]]).T
print(XY.shape)  # this will be (400, 2)
m = gpflow.models.GPR(data=(XY, z), kernel=k, mean_function=None)

# optimise hyper-parameters
opt = gpflow.optimizers.Scipy()

def objective_closure():
    return -m.log_marginal_likelihood()

opt_logs = opt.minimize(objective_closure,
                        m.trainable_variables,
                        options=dict(maxiter=100))

# predict on the training set
mean, var = m.predict_f(XY)
print(mean.numpy().shape)
# (400, 400)
# I would expect this to be (400,)
# If it was then I could compute the difference
# between the true data and the GPR prediction
# `diff = mean - z`
# but because the shape is not as expected this of course
# won't work.
The shape of z must be (N, 1), whereas in your case it is (N,). However, this is a missing check in GPflow and not your fault.
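A minimal sketch of the fix, using the variables from the code above (reshape z into a column vector before building the model; the prediction then comes back with shape (400, 1)):
z = z.reshape(-1, 1)                       # shape (400, 1) instead of (400,)
m = gpflow.models.GPR(data=(XY, z), kernel=k, mean_function=None)
# ... optimise the hyper-parameters as before ...
mean, var = m.predict_f(XY)
print(mean.numpy().shape)                  # now (400, 1)
diff = mean.numpy().ravel() - z.ravel()    # difference between the GPR prediction and the true data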
I am trying to use interpn (in Python using SciPy) to replicate results from Matlab's interp3. However, I am struggling to structure my arguments. I tried the following line:
f = interpn(blur_maps, fx, fy, pyr_level)
where blur_maps is a 600 x 800 x 7 array representing a grayscale image at seven levels of blur,
fx and fy are pixel indices into the seven maps (both are 2D arrays), and pyr_level is a 2D array that contains values from 1 to 7 indicating which blur map to interpolate from.
My question is: since I incorrectly arranged the arguments, how can I arrange them in a way that works? I tried to look up examples but didn't see anything similar. Here is an example of the data I am trying to interpolate:
import numpy as np
import cv2, math
from scipy.interpolate import interpn

levels = 7
img_path = '/Users/alimahdi/Desktop/i4.jpg'
img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2GRAY)
row, col = img.shape
x_range = np.arange(0, col)
y_range = np.arange(0, row)
fx, fy = np.meshgrid(x_range, y_range)
e = np.exp(np.sqrt(fx ** 2 + fy ** 2))
pyr_level = 7 * (e - np.min(e)) / (np.max(e) - np.min(e))
blur_maps = np.zeros((row, col, levels))
blur_maps[:, :, 0] = img
for i in range(levels - 1):
    img = cv2.pyrDown(img)
    r, c = img.shape
    tmp = img
    for j in range(int(math.log(row / r, 2))):
        tmp = cv2.pyrUp(tmp)
    blur_maps[:, :, i + 1] = tmp
pixelGrid = [np.arange(x) for x in blur_maps.shape]
interpPoints = np.array([fx.flatten(), fy.flatten(), pyr_level.flatten()])
interpValues = interpn(pixelGrid, blur_maps, interpPoints.T)
finalValues = np.reshape(interpValues, fx.shape)
I am now getting the following error: ValueError: One of the requested xi is out of bounds in dimension 0. I do know that the problem is in interpPoints, but I am not sure how to fix it. Any suggestions?
The documentation for scipy.interpolate.interpn states that the first argument is the grid of the data you are interpolating over (which here is just the integer pixel numbers), the second argument is the data (blur_maps), and the third argument is the interpolation points in the form (npoints, ndims). So you would have to do something like:
import scipy.interpolate
pixelGrid = [np.arange(x) for x in blur_maps.shape] # create grid of pixel numbers as per the docs
interpPoints = np.array([fx.flatten(), fy.flatten(), pyr_level.flatten()])
# interpolate
interpValues = scipy.interpolate.interpn(pixelGrid, blur_maps, interpPoints.T)
# now reshape the output array to get in the original format you wanted
finalValues = np.reshape(interpValues, fx.shape)
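As for the ValueError that was added to the question: interpn expects each column of the points array to correspond, in order, to the axes of the grid, and every coordinate has to lie inside that grid. Since blur_maps has shape (row, col, levels), the points most likely need to be ordered (fy, fx, pyr_level), and pyr_level needs to be scaled to the range 0..levels-1 rather than 0..7. A sketch of that adjustment, assuming the variables from the question's code:
# order the coordinates as (row, col, level) to match blur_maps, and keep the level coordinate in range
pyr_level = (levels - 1) * (e - np.min(e)) / (np.max(e) - np.min(e))   # values in 0..6
interpPoints = np.array([fy.flatten(), fx.flatten(), pyr_level.flatten()])
interpValues = scipy.interpolate.interpn(pixelGrid, blur_maps, interpPoints.T)
finalValues = np.reshape(interpValues, fx.shape)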
Hi, I am using the AgglomerativeClustering example in Python and trying to estimate its performance, but it switches the original labels.
I am trying to compare the predicted labels y_hc with the original labels y returned by make_blobs.
import scipy.cluster.hierarchy as sch
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
import numpy as np
import matplotlib.pyplot as plt

data, y = make_blobs(n_samples=300, n_features=2, centers=4, cluster_std=2, random_state=50)

plt.figure(2)
# create dendrogram
dendrogram = sch.dendrogram(sch.linkage(data, method='ward'))
plt.title('dendrogram')

# create clusters: linkage="average", affinity=metric, or linkage='ward', affinity='euclidean'
hc = AgglomerativeClustering(n_clusters=4, linkage="average", affinity='euclidean')
# save clusters for chart
y_hc = hc.fit_predict(data, y)

plt.figure(3)
# create scatter plot
plt.scatter(data[y == 0, 0], data[y == 0, 1], c='red', s=50)
plt.scatter(data[y == 1, 0], data[y == 1, 1], c='black', s=50)
plt.scatter(data[y == 2, 0], data[y == 2, 1], c='blue', s=50)
plt.scatter(data[y == 3, 0], data[y == 3, 1], c='cyan', s=50)
plt.xlim(-15, 15)
plt.ylim(-15, 15)
plt.scatter(data[y_hc == 0, 0], data[y_hc == 0, 1], s=10, c='red')
plt.scatter(data[y_hc == 1, 0], data[y_hc == 1, 1], s=10, c='black')
plt.scatter(data[y_hc == 2, 0], data[y_hc == 2, 1], s=10, c='blue')
plt.scatter(data[y_hc == 3, 0], data[y_hc == 3, 1], s=10, c='cyan')

for ii in range(4):
    print(ii)
    i0 = y_hc == ii
    counts = np.bincount(y[i0])
    valCountAtorgLbl = np.argmax(counts)
    accuracy0Tp = 100 * np.max(counts) / y[y == valCountAtorgLbl].shape[0]
    accuracy0Fp = 100 * np.min(counts) / y[y == valCountAtorgLbl].shape[0]
    print([accuracy0Tp, accuracy0Fp])
plt.show()
The clustering does not and cannot reproduce the original labels, only the original partitions.
You seem to assume that cluster 1 corresponds to label 1 (in fact, a label could be 'iris setosa', and there obviously is no way an unsupervised algorithm will come up with this cluster name...). It usually won't correspond; there probably isn't even the same number of clusters and classes, and there could be unlabeled noise points. You can use the Hungarian algorithm to compute the optimum mapping (or just a greedy matching) to produce a more intuitive color mapping.
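A minimal sketch of that matching, assuming y and y_hc from the code above (scipy.optimize.linear_sum_assignment implements the Hungarian algorithm; sklearn's confusion_matrix counts the overlaps):
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y, y_hc)                      # rows: true labels, columns: cluster ids
row_ind, col_ind = linear_sum_assignment(-cm)       # assignment that maximises the total overlap
mapping = {cluster: label for label, cluster in zip(row_ind, col_ind)}
y_hc_mapped = np.array([mapping[c] for c in y_hc])  # relabel each cluster as its best-matching class
print('accuracy after matching:', (y_hc_mapped == y).mean())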