Networkx OSMNX and Folium plotting different coloured edges [duplicate] - networkx

This question already has an answer here:
OSMnx: plot a network on an interactive web map with different colours per infrastructure
(1 answer)
Closed 1 year ago.
I have been trying to overlay different OSMnx plots onto folium, but I have been unable to. I want to achieve the same thing as shown in the image below with a simple osmnx plot, but on folium instead, with the highlighted edges drawn in red and every other edge in a different colour.
fig, ax = ox.plot_graph_routes(G,max_response_edges, bgcolor='k', node_size=30, node_color='#999999', node_edgecolor='none', node_zorder=2,
edge_color='#555555', edge_linewidth=1.5, edge_alpha=1,figsize = (20,20))
I have tried to use the following code:
H = G.copy()
H1 = G.copy()
H.remove_edges_from(G.edges - set(map(lambda x: tuple(x)+(0,),max_response_edges)))
m = ox.folium.plot_graph_folium(H,tiles='openstreetmap',popup_attribute='name',opacity = 1,color = 'red',weight= 10)
H1.remove_edges_from(set(map(lambda x: tuple(x)+(0,),max_response_edges)))
n = ox.folium.plot_graph_folium(H1,tiles='openstreetmap',popup_attribute='name',opacity = 1,color = 'blue',weight= 10)
n.add_to(m)
m
I tried making use of m.add_child() and m.add_to(), but neither proved useful. A similar Stack Overflow question was posted here; however, its approach did not work for me. Can folium overlays be done?

Got it working with the following code:
H = G.copy()
H1 = G.copy()
H.remove_edges_from(G.edges - set(map(lambda x: tuple(x)+(0,),max_response_edges)))
m = ox.folium.plot_graph_folium(H,tiles='openstreetmap',popup_attribute='name',opacity = 1,color = 'red',weight= 10)
H1.remove_edges_from(set(map(lambda x: tuple(x)+(0,),max_response_edges)))
m = ox.folium.plot_graph_folium(H1, graph_map = m,tiles='openstreetmap',popup_attribute='name',opacity = 1,color = 'blue',weight= 10)
m
Turns out I was missing the graph_map=m parameter.
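For completeness, the same overlay can also be built with plain folium primitives instead of plot_graph_folium. Below is a minimal, untested sketch, assuming H holds the selected red edges and H1 the remaining edges as in the snippet above; each subset goes into its own FeatureGroup so the layers can be toggled independently:
import folium
import osmnx as ox
# Convert each subgraph to GeoDataFrames; the edge frames carry LineString geometries.
nodes, red_edges = ox.graph_to_gdfs(H)
_, blue_edges = ox.graph_to_gdfs(H1)
# Base map centred on the network.
m = folium.Map(location=[nodes.y.mean(), nodes.x.mean()], zoom_start=14, tiles='openstreetmap')
# One FeatureGroup per edge subset, so folium's LayerControl can toggle them.
for name, gdf, colour in [('max response edges', red_edges, 'red'),
                          ('other edges', blue_edges, 'blue')]:
    layer = folium.FeatureGroup(name=name)
    for geom in gdf.geometry:
        # LineString coordinates are (lon, lat); folium expects (lat, lon).
        folium.PolyLine([(lat, lon) for lon, lat in geom.coords],
                        color=colour, weight=10, opacity=1).add_to(layer)
    layer.add_to(m)
folium.LayerControl().add_to(m)
m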

Related

Interpolation with radial basis function in julia

I have found a few radial basis function packages in Julia, like BasisExpansionFunction, Surrogates.jl, and ScatteredInterpolation.
However, I am unable to replicate the results from Python's scipy.interpolate.Rbf() function.
Python Example
from scipy.interpolate import Rbf
import numpy as np
xs = np.arange(10)
ys = xs**2 + np.sin(xs) + 1
interp_func = Rbf(xs, ys) # By default Rbf uses the multiquadric function
newarr = interp_func(np.arange(2.1, 3, 0.1))
print(newarr)
What is the correct approach to replicate the above example in Julia?
The first tutorial in Surrogates.jl shows how to make and interpolate a radial basis function.
using Surrogates
using LinearAlgebra
f = x -> x[1]*x[2]
lb = [1.0,2.0]
ub = [10.0,8.5]
x = sample(50,lb,ub,SobolSample())
y = f.(x)
my_radial_basis = RadialBasis(x,y,lb,ub)
#I want an approximation at (1.0,1.4)
approx = my_radial_basis((1.0,1.4))
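For the 1-D scipy snippet above, the same constructor should work directly: pass the vector of x samples, the vector of y values, and scalar lower/upper bounds to RadialBasis, then call the returned surrogate at the query points. Note that the default basis functions of the two libraries are not necessarily the same, so exact agreement with scipy's multiquadric default may require choosing the radial kernel explicitly.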

How can we evaluate a Gaussian in an intensity of an Image? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
You can refer to section 4.3.1 in this article if you want.
If pI is any pixel/intensity on this image, and dS is the (rho, theta) of that line in the Hough space, what is the meaning of the following statement from the paper?
g(pi, ds) is a Gaussian function, evaluated in pi, with a peak in correspondence with the detected scratch direction ds and a constant standard deviation.
Is the following a correct implementation?
function val = gaussC(pI, sigma, dS)
x = pI(1);
y = pI(2);
rho = dS(1);
theta = dS(2);
exponent = ((x-rho).^2 + (y-theta).^2)./(2*sigma);
val = (exp(-exponent));
end
EDIT:
My second proposal,
I = gray_imread('Scratch1.png');
dimension = 5;
sigma = 1;
pI = [22, 114];
dS = [-108, -80];
J = get_matrix_from_image(I, pI, dimension);
var = normpdf(J(:), dS(2), sigma);
get_matrix_from_image.m
function mat = get_matrix_from_image(input_image, ctr_point, dimension)
[height, width] = size(input_image);
col_count = width;
row_count = height;
xxx = col_count;
yyy = row_count;
if(ctr_point(1) < 1 && ctr_point(2) < 1)
mat = zeros(dimension, dimension);
else
x = ctr_point(1);
y = ctr_point(2);
start_x = x - floor(dimension/2);
end_x = start_x + dimension - 1;
start_y = y - floor(dimension/2);
end_y = start_y + dimension - 1;
if(start_x > xxx || end_x>xxx || start_y > yyy || end_y>yyy || ...
start_x < 1 || end_x<1 || start_y <1 || end_y<1)
mat = zeros(dimension, dimension);
else
mat = input_image(start_x:end_x, start_y:end_y);
end
end
end
Not quite. Basically you are manually coding formula (9) from here. Then:
...
exponent = ((x-rho).^2 + (y-theta).^2)./(2*sigma^2); % sigma is also squared
val = exp(-exponent); % superfluous bracket removed
val = val./(2*pi*sigma^2); % you also forgot the denominator part
end
Of course, you could write the whole thing a bit more efficiently. But unless you actually want to use this formula on a lot of data, I would keep it like this (it's very readable).
If you value performance, just use the built in function:
val = normpdf(pI,dS,sigma)
For new readers of this question: the OP reopened this question after editing it heavily, completely changing its nature. Therefore this answer now seems a bit off.
Your code is an incorrect implementation of the PDF of the normal distribution. The PDF of the normal distribution is:
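f(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)^2 / (2*sigma^2))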
IMO, if pI is defined by x and y, i.e. pI(x,y), and dS is defined by rho and theta, i.e. dS(rho,theta), then you cannot simply subtract rho from x and theta from y. You have to convert one of them to the other. In my code, I have converted dS(rho,theta) to dS(x,y) and then used it as μ in the formula of the PDF.
Furthermore, I think pI would be a matrix of 5 rows and 2 columns (5 pixels with x and y values); I say this on the basis of Figure 6 of the linked paper.
Now coming to the statement,
g(pi , ds) is a Gaussian function, evaluated
in pi, with a peak in correspondence with the detected
scratch direction ds and a constant standard deviation.
IMO, the author(s) of the paper suggest taking 5 pixels, calculating the PDF, and finding where the peak is.
Based on my understanding, its implementation should be:
function val = gaussC(pI,dS,sigma)
x = pI(:,1); %x values of all pixels
y = pI(:,2); %y values of all pixels
rho = dS(1);
theta = dS(2);
%Converting polar coordinates to rectangular coordinates to get mean value
%in x and y direction
exponent = [(x-rho*cos(theta)).^2 + (y-rho*sin(theta)).^2]./(2*sigma^2);
val = exp(-exponent)./(sigma*sqrt(2*pi));
end
After calculating the PDF, find the peak value.

Plot portfolio composition map in Julia (or Matlab)

I am optimizing a portfolio of N stocks over M levels of expected return. After doing this I get a series of weights (i.e. an N x M matrix where each row is a combination of stock weights for a particular level of expected return). Weights add up to 1.
Now I want to plot something called a portfolio composition map (the right plot in the picture), which is a plot of these stock weights over all levels of expected return, each stock with a distinct color and with its length (at every level of return) proportional to its weight.
My question is how to do this in Julia (or MATLAB)?
I came across this and the accepted solution seemed so complex. Here's how I would do it:
using Plots
@userplot PortfolioComposition
@recipe function f(pc::PortfolioComposition)
weights, returns = pc.args
weights = cumsum(weights,dims=2)
seriestype := :shape
for c=1:size(weights,2)
sx = vcat(weights[:,c], c==1 ? zeros(length(returns)) : reverse(weights[:,c-1]))
sy = vcat(returns, reverse(returns))
@series Shape(sx, sy)
end
end
# fake data
tickers = ["IBM", "Google", "Apple", "Intel"]
N = 10
D = length(tickers)
weights = rand(N,D)
weights ./= sum(weights, dims=2)
returns = sort!((1:N) + D*randn(N))
# plot it
portfoliocomposition(weights, returns, labels = tickers)
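(The @userplot macro is what generates the lowercase portfoliocomposition function called in the last line.)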
matplotlib has a pretty powerful polygon plotting capability, e.g. this link on plotting filled polygons:
plotting filled polygons in python
You can use this from Julia via the excellent PyPlot.jl package.
Note that the syntax for certain things changes; see the PyPlot.jl README and e.g. this set of examples.
You "just" need to calculate the coordinates from your matrix and build up a set of polygons to plot the portfolio composition graph. It would be nice to see the code if you get this working!
So I was able to draw it, and here's my code:
using PyPlot
using PyCall
@pyimport matplotlib.patches as patch
N = 10
D = 4
weights = Array(Float64, N,D)
for i in 1:N
w = rand(D)
w = w/sum(w)
weights[i,:] = w
end
weights = [zeros(Float64, N) weights]
weights = cumsum(weights,2)
returns = sort!([linspace(1,N, N);] + D*randn(N))
##########
# Plot #
##########
polygons = Array(PyObject, 4)
colors = ["red","blue","green","cyan"]
labels = ["IBM", "Google", "Apple", "Intel"]
fig, ax = subplots()
fig[:set_size_inches](5, 7)
title("Problem 2.5 part 2")
xlabel("Weights")
ylabel("Return (%)")
ax[:set_autoscale_on](false)
ax[:axis]([0,1,minimum(returns),maximum(returns)])
for i in 1:(size(weights,2)-1)
xy=[weights[:,i] returns;
reverse(weights[:,(i+1)]) reverse(returns)]
polygons[i] = matplotlib[:patches][:Polygon](xy, true, color=colors[i], label = labels[i])
ax[:add_artist](polygons[i])
end
legend(polygons, labels, bbox_to_anchor=(1.02, 1), loc=2, borderaxespad=0)
show()
# savefig("CompositionMap.png",bbox_inches="tight")
Can't say that this is the best way to do this, but at least it is working.

Using a clear portion of a picture to recreate a PSF

I'm trying to unblur the blurred segments of the following picture.
The original PSF was not given, so I proceeded to analyze the blurred part and see whether there was a word I could roughly make out. I found that I could make out "of" in the blurred section. I cropped out both the blurred "of" and its counterpart in the clear section, as seen below.
I then recalled from lectures on the FFT that you divide the blurred image (in the frequency domain) by a particular blurring function (in the frequency domain) to recreate the original image.
I thought that if I could do Unblurred (frequency domain) \ Blurred(frequency domain), the original PSF could be retrieved. Please advise on how I could do this.
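(Since blurring is a convolution with the PSF, in the frequency domain Blurred_fft = Unblurred_fft .* PSF_fft, so the element-wise division Blurred_fft ./ Unblurred_fft should give PSF_fft.)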
Below is my code:
img = im2double(imread('C:\Users\adhil\Desktop\matlab pics\image1.JPG'));
Blurred = imcrop(img,[205 541 13 12]);
Unblurred = imcrop(img,[39 140 13 12]);
UB = fftshift(Unblurred);
UB = fft2(UB);
UB = ifftshift(UB);
F_1a = zeros(size(B));
for idx = 1 : size(Blurred, 3)
B = fftshift(Blurred(:,:,idx));
B = fft2(B);
B = ifftshift(B);
UBa = UB(:,:,idx);
tmp = UBa ./ B;
tmp = ifftshift(tmp);
tmp = ifft2(tmp);
tmp = fftshift(tmp);
[J, P] = deconvblind(Blurred,tmp);
end
subplot(1,3,1);imshow(Blurred);title('Blurred');
subplot(1,3,2);imshow(Unblurred);title('Original Unblurred');
subplot(1,3,3);imshow(J);title('Attempt at unblurring');
This code, however, does not work, and I'm getting the following error:
Error using deconvblind
Expected input number 2, INITPSF, to be real.
Error in deconvblind>parse_inputs (line 258)
validateattributes(P{1},{'uint8' 'uint16' 'double' 'int16' 'single'},...
Error in deconvblind (line 122)
[J,P,NUMIT,DAMPAR,READOUT,WEIGHT,sizeI,classI,sizePSF,FunFcn,FunArg] = ...
Error in test2 (line 20)
[J, P] = deconvblind(Blurred,tmp);
Is this a good way to recreate the original PSF?
I'm not an expert in this area, but I have played around with deconvolution a little bit and have written a program to compute the point spread function when given a clear image and a blurred image. Once I got the psf function using this program, I verified that it was correct by using it to deconvolve the blurry image and it worked fine. The code is below. I know this post is extremely old, but hopefully it will still be of use to someone.
import numpy as np
import matplotlib.pyplot as plt
import cv2
def deconvolve(normal, blur):
    blur_fft = np.fft.rfft2(blur)
    normal_fft = np.fft.rfft2(normal)
    return np.fft.irfft2(blur_fft / normal_fft)
img = cv2.imread('Blurred_Image.jpg')
blur = img[:,:,0]
img2 = cv2.imread('Original_Image.jpg')
normal = img2[:,:,0]
psf_real = deconvolve(normal, blur)
fig = plt.figure(figsize=(10,4))
ax1 = plt.subplot(131)
ax1.imshow(blur)
ax2 = plt.subplot(132)
ax2.imshow(normal)
ax3 = plt.subplot(133)
ax3.imshow(psf_real)
plt.gray()
plt.show()
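One caveat with the straight division in deconvolve(): wherever the sharp image's spectrum is close to zero, the quotient blows up. A minimal, untested Wiener-style variant (eps is an arbitrary choice) would be:
import numpy as np
def deconvolve_reg(normal, blur, eps=1e-3):
    # Multiply by the conjugate and add a small constant to the squared
    # magnitude, so near-zero frequencies of the sharp image do not dominate.
    blur_fft = np.fft.rfft2(blur)
    normal_fft = np.fft.rfft2(normal)
    return np.fft.irfft2(blur_fft * np.conj(normal_fft) / (np.abs(normal_fft) ** 2 + eps))
The result of deconvolve_reg(normal, blur) can then be displayed exactly like psf_real above.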

Function definition context in MATLAB [duplicate]

This question already has answers here:
In MATLAB, can I have a script and a function definition in the same file?
(7 answers)
Closed 8 years ago.
I have written a MATLAB program to render a 'Delta E' or color difference image for 2 given images. However, when I run the program, I receive this error:
Error: File: deltaE.m Line: 6 Column: 1
Function definitions are not permitted in this context.
Here is the program:
imageOriginal = imread('1.jpg');
imageModified = imread('2.jpg');
function [imageOut] = deltaE(imageOriginal, imageModified)
[imageHeight imageWidth imageDepth] = size(imageOriginal);
% Convert image from RGB colorspace to lab color space.
cform = makecform('srgb2lab');
labOriginal = applycform(im2double(imageOriginal),cform);
labModified = applycform(im2double(imageModified),cform);
% Extract out the color bands from the original image
% into 3 separate 2D arrays, one for each color component.
L_original = labOriginal(:, :, 1);
a_original = labOriginal(:, :, 2);
b_original = labOriginal(:, :, 3);
L_modified = labModified(:,:,1);
a_modified = labModified(:,:,2);
b_modified = labModified(:,:,3);
% Create the delta images: delta L, delta A, and delta B.
delta_L = L_original - L_modified;
delta_a = a_original - a_modified;
delta_b = b_original - b_modified;
% This is an image that represents the color difference.
% Delta E is the square root of the sum of the squares of the delta images.
delta_E = sqrt(delta_L .^ 2 + delta_a .^ 2 + delta_b .^ 2);
imageOut = delta_E;
end
I might have made a beginner error, since I'm just 17 and I'm starting out with MATLAB. It'd be great if you could tell me what I'm doing wrong.
You can't define a function within a script. You need to define functions in a separate file, or turn the script into a (main) function, in which case your other functions become subfunctions of it. See also here.
EDIT: From MATLAB R2016b you can define local functions within a script; see here.