Can I add random nodes on edges manually in OSMnx? - networkx

I am new to the OSMnx and NetworkX packages in Python.
Let's say I have the following example:
import numpy as np
import osmnx as ox
import geopandas as gpd
import networkx as nx
place_name = 'Fefan'
graph = ox.graph_from_place(place_name, network_type='drive')
graph_proj = ox.project_graph(graph)
nodes_proj = ox.graph_to_gdfs(graph_proj, nodes=True, edges=False)
ox.plot_graph(graph_proj)
As you can see, I only obtain two nodes for this place. I guess that's how it is in OSM. Is there any way I can manually add random nodes to this graph, especially on its edges?
For the broader picture: I need the nodes to calculate distance matrices between some buildings, which are not shown here.

You can do this in the following way.
First, import the nodes and edges as GeoDataFrames.
import numpy as np
import osmnx as ox
import geopandas as gpd
import networkx as nx
place_name = 'Fefan'
graph = ox.graph_from_place(place_name, network_type='drive')
nodes = ox.graph_to_gdfs(graph, nodes=True, edges=False)
edges = ox.graph_to_gdfs(graph, edges=True, nodes=False)
ox.plot_graph(graph)
Create a dictionary for the new nodes. I have just added one new node in this case.
import geopandas as gpd
from shapely.geometry import Point

my_dict = {
    '001': {
        'y': 7.367210,
        'x': 151.838487,
        'street_count': 1
    }
}
Create a new GeoDataFrame for the new node.
tmp_list = []
for item_key, item_value in my_dict.items():
    tmp_list.append({
        'geometry': Point(item_value['x'], item_value['y']),
        'osmid': item_key,
        'y': item_value['y'],
        'x': item_value['x'],
        'street_count': item_value['street_count']
    })
my_nodes = gpd.GeoDataFrame(tmp_list, crs=nodes.crs).set_index('osmid')
Concatenate the new GeoDataFrame (my_nodes) onto the old one (nodes), keeping osmid as the index so the graph can be rebuilt, and plot. (DataFrame.append was removed in pandas 2.0, so pd.concat is used instead.)
import pandas as pd
nodes = pd.concat([nodes, my_nodes])
graph2 = ox.graph_from_gdfs(nodes, edges)
ox.plot_graph(graph2)
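For the broader goal of a distance matrix between buildings, a common pattern is to snap each building to its nearest network node and then take pairwise shortest-path lengths. A minimal sketch, assuming the OSMnx >= 1.0 API; building_points is a hypothetical placeholder, not data from the question, and note that an isolated node like the one added above has no edges and is therefore unreachable by any path.
import networkx as nx
import osmnx as ox

# Hypothetical building locations as (lon, lat) pairs -- placeholders only
building_points = [(151.84, 7.37), (151.85, 7.36)]
xs = [p[0] for p in building_points]
ys = [p[1] for p in building_points]

# Snap each building to its nearest graph node
nearest = ox.distance.nearest_nodes(graph2, X=xs, Y=ys)

# Pairwise network distances (in meters, via the 'length' edge attribute)
dist_matrix = [
    [nx.shortest_path_length(graph2, u, v, weight='length') for v in nearest]
    for u in nearest
]
print(dist_matrix)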

Related

Migrating from basemap to cartopy - coordinates offset?

I was happy with basemap showing some points of interest I have visited, but since basemap is dead, I am moving over to cartopy.
So far so good: I have transferred all the map features (country borders, etc.) without major problems, but I noticed my points of interest are shifted a bit to the north-east in cartopy.
Minimal example from the old basemap (showing two points of interest on the Germany-Czech border):
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

map = Basemap(projection='merc', epsg='3395',
              llcrnrlon=11.5, urcrnrlon=12.5,
              llcrnrlat=50, urcrnrlat=50.7,
              resolution='f')
map.fillcontinents(color='Bisque', lake_color='LightSkyBlue')
map.drawcountries(linewidth=0.2)
lats = [50.3182289, 50.2523744]
lons = [12.1010486, 12.0906336]
x, y = map(lons, lats)
map.plot(x, y, marker='.', markersize=1, mfc='red', mew=1, mec='red', ls='None')
plt.savefig('example-basemap.png', dpi=200, bbox_inches='tight', pad_inches=0)
And the same example in cartopy, where both points are slightly shifted to the north-east, as can also be verified, e.g., via Google Maps:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt

ax = plt.axes(projection=ccrs.Mercator())
ax.set_extent([11.5, 12.5, 50, 50.7])
ax.add_feature(cfeature.LAND.with_scale('10m'), color='Bisque')
ax.add_feature(cfeature.LAKES.with_scale('10m'), alpha=0.5)
ax.add_feature(cfeature.BORDERS.with_scale('10m'), linewidth=0.2)
lats = [50.3182289, 50.2523744]
lons = [12.1010486, 12.0906336]
ax.plot(lons, lats, transform=ccrs.Geodetic(),
        marker='.', markersize=1, mfc='red', mew=1, mec='red', ls='None')
plt.savefig('cartopy.png', dpi=200, bbox_inches='tight', pad_inches=0)
Any idea how to get cartopy to plot the points at the expected coordinates? I tried cartopy 0.18 and 0.20 with the same result.
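One thing worth ruling out (an assumption, not a confirmed fix) is CRS ambiguity: set_extent accepts an explicit crs argument, and plain lon/lat point data is conventionally passed with transform=ccrs.PlateCarree() rather than ccrs.Geodetic(), which treats the coordinates as ellipsoidal. A minimal sketch of that variant:
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

ax = plt.axes(projection=ccrs.Mercator())
# Pass the extent CRS explicitly so the lon/lat bounds are not reinterpreted
ax.set_extent([11.5, 12.5, 50, 50.7], crs=ccrs.PlateCarree())
# Plain lon/lat point data given in PlateCarree coordinates
ax.plot([12.1010486, 12.0906336], [50.3182289, 50.2523744],
        transform=ccrs.PlateCarree(),
        marker='.', markersize=1, mfc='red', mew=1, mec='red', ls='None')
plt.show()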

deepface for generating embedding for multiple faces in single image

I am using the deepface library for a face recognition project. In my case, I want to detect multiple faces present in the test image using Facenet. When I apply deepface's preprocess function, I see that it generates only one embedding, whereas four faces are present in the given image. How can I get the respective embeddings for each face?
My code looks like this:
import deepface as DeepFace
from elasticsearch import Elasticsearch
from deepface.basemodels import Facenet
import os
from deepface.commons import functions

model = Facenet.loadModel()
target_size = (160, 160)
embedding_size = 128
backends = ['opencv', 'ssd', 'dlib', 'mtcnn', 'retinaface']
target_path = "/home/niveus/PycharmProjects/deepface-elastic-research/deepface/align_img/deep_aku.jpg"
target_img = functions.preprocess_face(target_path, target_size=target_size, detector_backend=backends[3])
target_embedding = model.predict(target_img)[0]  # [0]
print(target_embedding.shape)
print("embeddings", target_embedding)
DeepFace enforces a single face per image by default, but you can still process multiple faces.
1- Extract faces with RetinaFace
#!pip install retina-face
from retinaface import RetinaFace

faces = RetinaFace.extract_faces(img_path="img.jpg", align=True)
2- Pass extracted faces to deepface
#!pip install deepface
from deepface import DeepFace

embeddings = []
for face in faces:
    embedding = DeepFace.represent(img_path=face, model_name='Facenet', enforce_detection=False)
    embeddings.append(embedding)
The trick is to set the enforce_detection argument to False, because we are passing already-detected faces to deepface.
You can also use MTCNN instead of RetinaFace. RetinaFace is amazing but very slow, whereas MTCNN offers a good balance of accuracy and speed.
from mtcnn import MTCNN
from deepface import DeepFace
import cv2

img = cv2.cvtColor(cv2.imread("deepface/tests/dataset/img1.jpg"), cv2.COLOR_BGR2RGB)
detector = MTCNN()
detections = detector.detect_faces(img)

embeddings = []
for detection in detections:
    confidence = detection["confidence"]
    if confidence > 0.90:
        x, y, w, h = detection["box"]
        detected_face = img[int(y):int(y + h), int(x):int(x + w)]
        embedding = DeepFace.represent(detected_face, model_name='Facenet', enforce_detection=False)
        embeddings.append(embedding)
Both RetinaFace and MTCNN find facial landmarks such as the eyes. That way, deepface can align the faces in the background, which increases the quality of the embeddings dramatically. OpenCV, however, is not good at finding landmarks, so if you use the opencv backend, its score might be lower because of the missing alignment.
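Once you have per-face embeddings, you will typically compare them with a distance metric. A minimal sketch of cosine distance in NumPy; it assumes each embedding is a plain numeric vector (depending on the deepface version, DeepFace.represent may wrap the vector in a list or dict, so unwrap it first), and the 0.40 threshold below is an assumption, not an official Facenet value.
import numpy as np

def cosine_distance(a, b):
    # Cosine distance between two embedding vectors; 0 means identical direction
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical usage with two of the embeddings collected above:
# d = cosine_distance(embeddings[0], embeddings[1])
# print("same person" if d < 0.40 else "different person")  # threshold is an assumption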

fill oceans for basemap in 3D

I would like to fill the oceans for my basemap in 3D, but
ax.add_collection3d(m.drawmapboundary(fill_color='aqua'))
doesn't seem to work, because basemap's drawmapboundary method returns a matplotlib.collections.PatchCollection object, which is not supported by add_collection3d. Is there any workaround similar to the one done for land polygons here? Thank you!
Drawing a rectangle (polygon) below the map is one solution. Here is working code you can try.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.basemap import Basemap
from matplotlib.collections import PolyCollection

map = Basemap()
fig = plt.figure()
ax = Axes3D(fig)  # on newer Matplotlib, use ax = fig.add_subplot(projection='3d') instead
ax.azim = 270
ax.elev = 50
ax.dist = 8

ax.add_collection3d(map.drawcoastlines(linewidth=0.20))
ax.add_collection3d(map.drawcountries(linewidth=0.15))

polys = []
for polygon in map.landpolygons:
    polys.append(polygon.get_coords())

# This fills the land polygons with colors
lc = PolyCollection(polys, edgecolor='black', linewidth=0.3,
                    facecolor='#BBAAAA', alpha=1.0, closed=False)
lcs = ax.add_collection3d(lc, zs=0)  # set zero zs

# Create the underlying blue rectangle
# Its zs value is -0.003, so it is plotted below the land polygons
bpgon = np.array([[-180., -90],
                  [-180, 90],
                  [180, 90],
                  [180, -90]])
polys2 = [bpgon]
lc2 = PolyCollection(polys2, edgecolor='none', linewidth=0.1,
                     facecolor='#445599', alpha=1.0, closed=False)
lcs2 = ax.add_collection3d(lc2, zs=-0.003)  # set negative zs value
plt.show()
The resulting plot:

python basemap close to North pole

I want to plot some data far north using basemap. Unfortunately, I cannot get the meridians to be displayed. I think this is because they are not shown north of 80 degrees. Is there any way to fix this?
Basically, I use this code:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(num=None, figsize=(12, 8))
m = Basemap(projection='poly', resolution=None,
            lon_0=16, lat_0=81.8,
            llcrnrlon=9.5, llcrnrlat=80,
            urcrnrlon=22, urcrnrlat=82.5)
m.drawparallels(np.arange(80., 82.5, 0.5), labels=[True, False, False, False])
m.drawmeridians(np.arange(10.0, 22.0, 2.0), labels=[True, True, False, True])
m.drawmapboundary(fill_color='lightblue')
plt.show()
which produces this figure:
But I want the meridians to be displayed as well. How can I do this?
What you found is one of many shortcomings in Basemap, and one of the reasons Cartopy was created. As a simple workaround, you can use the plot() function to draw the missing meridian curves yourself, as follows.
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(num=None, figsize=(12, 8))
m = Basemap(projection='poly', resolution=None,
            lon_0=16, lat_0=81.8,
            llcrnrlon=9.5, llcrnrlat=80,
            urcrnrlon=22, urcrnrlat=82.5)
m.drawparallels(np.arange(80.0, 83.0, 0.5), labels=[True, False, False, False])

# This does not fully work: only the labels are rendered, not the lines
m.drawmeridians(np.arange(10.0, 22.0, 2.0), labels=[True, True, False, True])

# A workaround to get the meridians plotted
phs = np.arange(80, 83, 0.05)
for ea in np.arange(8.0, 22.0, 2.0):
    lds = np.ones(len(phs)) * ea
    m.plot(lds, phs, latlon=True, color="k", linewidth=0.5)

m.drawmapboundary(fill_color='lightblue')
plt.show()
The resulting plot:

How to specify edge length in NetworkX based on edge weight

import networkx as nx
import numpy as np
import pylab as plt

# Generate a graph with 4 nodes
erdo_graph = nx.erdos_renyi_graph(4, 0.7, directed=False)

# Add edge weights (p needs one probability per choice, summing to 1)
for u, v, d in erdo_graph.edges(data=True):
    d['weight'] = np.random.choice(np.arange(1, 7), p=[0.1, 0.05, 0.05, 0.2, 0.3, 0.3])

# If you want to print out the edge weights:
labels = nx.get_edge_attributes(erdo_graph, 'weight')
print("Here are the edge weights: ", labels)

# Following the "Networkx Spring Layout with different edge values" link that you supplied
# (node labels of erdos_renyi_graph run from 0 to 3):
initialpos = {0: (0, 0), 1: (0, 3), 2: (0, -1), 3: (5, 5)}
pos = nx.spring_layout(erdo_graph, weight='weight', pos=initialpos)
nx.draw_networkx(erdo_graph, pos)
plt.show()
Whenever I position the nodes with a layout, I expect nodes connected by a lower edge weight to be closer to each other, but that doesn't seem to be the case.
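Note that spring_layout interprets weight as spring strength: heavier edges pull nodes closer together, which is the opposite of treating weight as a desired length. A sketch of two possible workarounds, reusing erdo_graph and initialpos from above (kamada_kawai_layout requires SciPy):
import networkx as nx
import pylab as plt

# Option 1: kamada_kawai_layout treats weights as distances (via shortest
# paths), so low-weight edges come out short and their nodes sit closer.
pos_kk = nx.kamada_kawai_layout(erdo_graph, weight='weight')

# Option 2: invert the weights so that spring_layout's strong springs
# (high values) correspond to the originally low-weight edges.
for u, v, d in erdo_graph.edges(data=True):
    d['inv_weight'] = 1.0 / d['weight']
pos_spring = nx.spring_layout(erdo_graph, weight='inv_weight', pos=initialpos)

nx.draw_networkx(erdo_graph, pos_kk)
plt.show()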