Given an array of pixels, how do I set up a Gtk3 image in Python? - gtk3

I have written a simple GTK 3 application that reads an image, displays it, applies a threshold to the pixel intensity and tries to draw the result. So I have a numpy array with the pixel values. I do not know how to load the GtkImage from an array of pixels.
It seems that the gtk.gdk.Drawable class might be the answer, but I cannot import GdkDrawable because the introspection typelib is not found. So I am stuck - how do I use a numpy array to set the image of a GtkImage object in Python?
The only way I know to do it is to use Pillow to write the array to a png file and then set the GtkImage widget from the file. Since I want to update as a slider is adjusted, that is too slow.
import gi
import numpy as np
from PIL import Image
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Gdk, GdkPixbuf
# The GtkImage widget was loaded from a file elsewhere.
# Get the pixel array from its pixbuf.
pb = self.image_widget.get_pixbuf()
h = pb.get_height()
w = pb.get_width()
pixels = pb.get_pixels()
pixels = np.frombuffer(pixels, dtype=np.uint8)
# Assumes an RGB pixbuf with no row padding (rowstride == 3 * width)
pixels = pixels.reshape((h, w, 3))
# Apply the threshold
self.pixels = pixels
self.pixels = (self.pixels > thresh) * 255
# Round-trip through a temporary PNG - this is the slow part
im = Image.fromarray(np.uint8(self.pixels))
im.save('temp.png')
self.image_widget.set_from_file('temp.png')
self.image_widget.show()
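For comparison, a minimal sketch of the file-free route: build the pixbuf directly from the array with GdkPixbuf.Pixbuf.new_from_bytes (available in GdkPixbuf 2.32+). This assumes an 8-bit RGB array with no row padding; pixbuf_from_array is a hypothetical helper name, not a library function:
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import GLib, GdkPixbuf
import numpy as np

def pixbuf_from_array(arr):
    # arr is expected to be a (height, width, 3) uint8 RGB array
    h, w, _ = arr.shape
    data = GLib.Bytes.new(arr.tobytes())
    return GdkPixbuf.Pixbuf.new_from_bytes(
        data, GdkPixbuf.Colorspace.RGB, False, 8, w, h, w * 3)

# e.g. in the slider callback:
# self.image_widget.set_from_pixbuf(pixbuf_from_array(np.uint8(self.pixels)))
Because no file I/O is involved, this should be fast enough to run on every slider tick.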


how to force deepface to generate embeddings for multiple faces in a single image (assuming there is more than one face in the given image)

I tried adding the following code, but I end up with this error from the represent function:
TypeError: stat: path should be string, bytes, os.PathLike or integer, not dict
I understand that the first argument to the represent function should be an image path, but I am supplying the output of MTCNN detection, which is metadata. I am unable to figure out how I can force it to produce multiple embeddings when there is more than one face in a given image.
from mtcnn import MTCNN
from deepface import DeepFace
import cv2
#pass1: detect faces
img = cv2.cvtColor(cv2.imread("all_faces.jpeg"), cv2.COLOR_BGR2RGB)
detector = MTCNN()
faces = detector.detect_faces(img)
#pass2: embed each detection (this is the call that raises the TypeError)
embeddings = []
for face in faces:
    embedding = DeepFace.represent(face, model_name = 'Facenet', enforce_detection = False)
    embeddings.append(embedding)
Your face object stores the bounding box, confidence score and facial keypoints.
face: {'box': [412, 161, 593, 853], 'confidence': 0.9996218681335449, 'keypoints': {'left_eye': (579, 518), 'right_eye': (861, 518), 'nose': (735, 681), 'mouth_left': (575, 790), 'mouth_right': (883, 773)}}
You need to extract the faces one by one using the bounding box information. In short, find the detected face with face["box"], then pass the cropped face to the deepface library as shown below.
from mtcnn import MTCNN
from deepface import DeepFace
import cv2
#pass1: detect faces
img = cv2.cvtColor(cv2.imread("deepface/tests/dataset/img1.jpg"), cv2.COLOR_BGR2RGB)
detector = MTCNN()
faces = detector.detect_faces(img)
#pass2: crop each face with its bounding box and embed it
embeddings = []
for face in faces:
    x, y, w, h = face["box"]
    detected_face = img[int(y):int(y+h), int(x):int(x+w)]
    embedding = DeepFace.represent(detected_face, model_name = 'Facenet', enforce_detection = False)
    embeddings.append(embedding)

deepface for generating embeddings for multiple faces in a single image

I am using the deepface library for a face recognition project. In my case, I want to detect multiple faces present in the test image using Facenet. When I apply deepface's preprocess function I see that it generates only one embedding, whereas four faces are present in the given image. How can I get the respective embedding for each face?
My code looks like this:
import os
from elasticsearch import Elasticsearch
from deepface import DeepFace
from deepface.basemodels import Facenet
from deepface.commons import functions
model = Facenet.loadModel()
target_size = (160, 160)
embedding_size = 128
backends = ['opencv', 'ssd', 'dlib', 'mtcnn', 'retinaface']
target_path = "/home/niveus/PycharmProjects/deepface-elastic-research/deepface/align_img/deep_aku.jpg"
# preprocess_face detects and aligns a single face only
target_img = functions.preprocess_face(target_path, target_size = target_size, detector_backend = backends[3])
target_embedding = model.predict(target_img)[0]
print(target_embedding.shape)
print("embeddings", target_embedding)
DeepFace expects a single face per image by default, but you are still able to process multiple faces.
1- Extract faces with retinaface
#!pip install retina-face
from retinaface import RetinaFace
faces = RetinaFace.extract_faces(img_path = "img.jpg", align = True)
2- Pass extracted faces to deepface
#!pip install deepface
from deepface import DeepFace
embeddings = []
for face in faces:
    embedding = DeepFace.represent(img_path = face, model_name = 'Facenet', enforce_detection = False)
    embeddings.append(embedding)
The trick is to set the enforce_detection argument to False, because we pass already-detected faces to deepface.
You can also use MTCNN instead of RetinaFace. RetinaFace is amazing but very slow; MTCNN offers good accuracy at much higher speed.
from mtcnn import MTCNN
from deepface import DeepFace
import cv2
img = cv2.cvtColor(cv2.imread("deepface/tests/dataset/img1.jpg"), cv2.COLOR_BGR2RGB)
detector = MTCNN()
detections = detector.detect_faces(img)
embeddings = []
for detection in detections:
    confidence = detection["confidence"]
    if confidence > 0.90:
        x, y, w, h = detection["box"]
        detected_face = img[int(y):int(y+h), int(x):int(x+w)]
        embedding = DeepFace.represent(detected_face, model_name = 'Facenet', enforce_detection = False)
        embeddings.append(embedding)
Both RetinaFace and MTCNN find facial landmarks such as the eyes. That way, deepface can align faces in the background, which increases the quality of the embeddings dramatically. However, OpenCV is not good at finding landmarks, so if you use the opencv backend its accuracy may be lower because of poor alignment.
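As an aside, newer deepface releases also let you pick the detector backend directly in represent, so detection, alignment and embedding happen in one call; the exact availability of these keyword arguments (and whether one result is returned per detected face) depends on your deepface version:
from deepface import DeepFace

# Let deepface detect and align internally using the mtcnn backend.
# In recent versions, represent returns one result per detected face.
results = DeepFace.represent(
    img_path="deepface/tests/dataset/img1.jpg",
    model_name="Facenet",
    detector_backend="mtcnn",
    align=True)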

Discord.py Image Editing with Python Imaging Library only works for some pictures?

I've tried an image-editing effect that should recolor a picture with little black dots; however, it only works for certain images and I honestly don't know why. Any ideas?
from PIL import Image
#url = member.avatar_url
#print(url)
#response = requests.get(url=url, stream=True).raw
#imag = Image.open(response)
imag = Image.open("unknown.png")
#out = Image.new('I', imag.size)
i = 0
width, height = imag.size
for x in range(width):
    i += 1
    for y in range(height):
        if i == 5:
            # changes every 5th pixel to a certain brightness value
            r, g, b, a = imag.getpixel((x, y))
            print(imag.getpixel((x, y)))
            brightness = int(sum([r, g, b]) / 3)
            print(brightness)
            imag.putpixel((x, y), (brightness, brightness, brightness, 255))
            i = 0
        else:
            i += 1
            imag.putpixel((x, y), (255, 255, 255, 255))
imag.save("test.png")
The commented-out lines are what I would've used if my tests had worked. Using local PNGs doesn't always work either.
The image that doesn't work has no alpha channel, but your code assumes one. Try forcing an alpha channel on opening, like this:
imag = Image.open("unknown.png").convert('RGBA')
See also What's the difference between a "P" and "L" mode image in PIL?
A few other ideas too:
looping over images with Python for loops is slow and inefficient - in general, try to find a vectorised Numpy alternative
you have an alpha channel but set it to 255 (i.e. opaque) everywhere, so in reality, you may as well not have it and save roughly 1/4 of the file size
your output image is RGB with all 3 components set identically - that is really a greyscale image, so you could create it as such and your output file will be 1/3 the size
So, here is an alternative rendition:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image and ensure neither palette nor alpha
im = Image.open('paddington.png').convert('RGB')
# Make into Numpy array
na = np.array(im)
# Calculate greyscale image as mean of R, G and B channels
grey = np.mean(na, axis=-1).astype(np.uint8)
# Make white output image
out = np.full(grey.shape, 255, dtype=np.uint8)
# Copy across selected pixels
out[1::6, 1::4] = grey[1::6, 1::4]
out[3::6, 0::4] = grey[3::6, 0::4]
out[5::6, 2::4] = grey[5::6, 2::4]
# Revert to PIL Image
Image.fromarray(out).save('result.png')
That transforms the original image into the dotted greyscale version (the before and after images are not reproduced here).
If you are happy to calculate the greyscale with the usual luminance weighting, rather than averaging R, G and B, you could change to this:
im = Image.open('paddington.png').convert('L')
and remove the line that does the averaging:
grey = np.mean(na, axis=-1).astype(np.uint8)
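For completeness, a condensed sketch of that variant, using the same sampling pattern as above:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image and convert to greyscale with PIL's usual luminance weighting
grey = np.array(Image.open('paddington.png').convert('L'))
# Make white output image and copy across selected pixels
out = np.full(grey.shape, 255, dtype=np.uint8)
out[1::6, 1::4] = grey[1::6, 1::4]
out[3::6, 0::4] = grey[3::6, 0::4]
out[5::6, 2::4] = grey[5::6, 2::4]
Image.fromarray(out).save('result.png')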

Transferring basemap - cartopy

I am using basemap on Python 2.7 but would like to move to Python 3 and, therefore, to cartopy. It would be fantastic if you could give me some advice on how to change my code from basemap to cartopy:
This is the basemap code:
import numpy as np
from mpl_toolkits.basemap import Basemap
# plot map without continents and coastlines
m = Basemap(projection='kav7', lon_0=0)
# draw map boundary, transparent
m.drawmapboundary()
m.drawcoastlines()
# draw parallels and meridians, no labels
if (TheLatInfo[1] == len(TheLatList)) & (TheLonInfo[1] == len(TheLonList)):
    m.drawparallels(np.arange(-90, 90., 30.))
    m.drawmeridians(np.arange(-180, 180., 60.))
grids = m.pcolor(LngArrLons, LngArrLats, MSKTheCandData, cmap=cmap, norm=norm, latlon=True)
This is the cartopy example I found, with some bits changed:
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import cartopy.feature as cpf
ax = plt.axes(projection=ccrs.Robinson())
ax.coastlines()
ax.set_boundary  # note: a bare attribute access - as written this line does nothing
ax.gridlines(draw_labels=False)
plt.show()
I am not sure how to place the gridlines at exact positions or how to color them black instead of grey. Furthermore, I wonder how to overlay my actual data on the map. Is "ax.pcolor" well enough supported by cartopy?
Thank you!
To color your gridlines black, you can use a color= keyword:
ax.gridlines(color='black')
To specify lat/lon gridline placement, you really only need a few extra lines, if you don't care about labels:
import matplotlib.ticker as mticker
gl = ax.gridlines(color='black')
gl.xlocator = mticker.FixedLocator([-180, -90, 0, 90, 180])
gl.ylocator = mticker.FixedLocator([-90,-45,0,45,90])
(As of writing this, Robinson projections don't support gridline labels.)
To overlay your data on the map, pcolor should work, but it's famously slow. I would recommend pcolormesh instead, though you can substitute one for the other in this syntax:
ax.pcolormesh(lon_values, lat_values, data)
Note that if your data come on a different projection than the map projection you're plotting (typically true), you need to specify the data's projection in the plotting syntax using the transform= keyword. That tells cartopy to transform your data from their original projection to that of the map. Plate Carrée is the same as cylindrical equidistant (typical for climate model output, for example):
ax.pcolormesh(lon_values, lat_values, data, transform=ccrs.PlateCarree())
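Putting those pieces together, here is a minimal self-contained sketch; the random field is only a stand-in for your MSKTheCandData array, and the 30/60-degree spacing matches your basemap call:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import cartopy.crs as ccrs
# Synthetic global field standing in for the real data array
lons = np.arange(-180, 181, 5)   # cell edges
lats = np.arange(-90, 91, 5)
data = np.random.rand(lats.size - 1, lons.size - 1)
ax = plt.axes(projection=ccrs.Robinson())
ax.coastlines()
gl = ax.gridlines(color='black')
gl.xlocator = mticker.FixedLocator(np.arange(-180, 181, 60))
gl.ylocator = mticker.FixedLocator(np.arange(-90, 91, 30))
# The data live on a regular lat/lon grid, so declare PlateCarree
# as their source CRS; cartopy reprojects onto the Robinson map.
ax.pcolormesh(lons, lats, data, transform=ccrs.PlateCarree())
plt.show()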

cartopy: map overlay on NOAA APT image

I am working on a project to decode NOAA APT images; so far I have reached the stage where I can produce images from raw IQ recordings from RTL-SDRs. Here is one of the decoded images,
Decoded NOAA APT image (not reproduced here); this image will be used as input for the code (referred to as m3.png from here on).
Now I am working on overlaying map boundaries on the image (note: only on the left half of the above image).
We know the time at which the image was captured and the satellite info: position, direction, etc. So I used the position of the satellite to get the center of the map projection, and the direction of the satellite to rotate the image appropriately.
First I tried in Basemap, here is the code
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import numpy as np
from scipy import ndimage
im = plt.imread('m3.png')
im = im[:,85:995] # crop only the first part of whole image
rot = 198.3913296679117 # degrees, direction of sat movement
center = (50.83550180700588, 16.430852851867176) # lat long
rotated_img = ndimage.rotate(im, rot) # rotate image
w = rotated_img.shape[1]*4000*0.81 # in meters, spec says 4km per pixel, but I had to make it 81% less to get better image
h = rotated_img.shape[0]*4000*0.81 # in meters, spec says 4km per pixel, but I had to make it 81% less to get better image
m = Basemap(projection='cass',lon_0 = center[1],lat_0 = center[0],width = w,height = h, resolution = "i")
m.drawcoastlines(color='yellow')
m.drawcountries(color='yellow')
im = plt.imshow(rotated_img, cmap='gray', extent=(*plt.xlim(), *plt.ylim()))
plt.show()
I got this image as a result, which seems pretty good.
I wanted to move the code to Cartopy as it is easier to install and is actively developed. I was unable to find a similar way to set the boundaries, i.e. width and height in meters. So I modified the most similar example I could find, and used a function that adds meters to longitudes and latitudes to set the boundaries.
Here is the code in Cartopy,
import matplotlib.pyplot as plt
import numpy as np
import cartopy.crs as ccrs
from scipy import ndimage
import cartopy.feature
im = plt.imread('m3.png')
im = im[:,85:995] # crop only the first part of whole image
rot = 198.3913296679117 # degrees, direction of sat movement
center = (50.83550180700588, 16.430852851867176) # lat long
def add_m(center, dx, dy):
    # source: https://stackoverflow.com/questions/7477003/calculating-new-longitude-latitude-from-old-n-meters
    new_latitude = center[0] + (dy / 6371000.0) * (180 / np.pi)
    new_longitude = center[1] + (dx / 6371000.0) * (180 / np.pi) / np.cos(center[0] * np.pi/180)
    return [new_latitude, new_longitude]
fig = plt.figure()
img = ndimage.rotate(im, rot)
dx = img.shape[0]*4000/2*0.81 # in meters
dy = img.shape[1]*4000/2*0.81 # in meters
leftbot = add_m(center, -1*dx, -1*dy)
righttop = add_m(center, dx, dy)
img_extent = (leftbot[1], righttop[1], leftbot[0], righttop[0])
ax = plt.axes(projection=ccrs.PlateCarree())
ax.imshow(img, origin='upper', cmap='gray', extent=img_extent, transform=ccrs.PlateCarree())
ax.coastlines(resolution='50m', color='yellow', linewidth=1)
ax.add_feature(cartopy.feature.BORDERS, linestyle='-', edgecolor='yellow')
plt.show()
Here is the result from Cartopy; it is not as good as the result from Basemap.
I have the following questions:
1. I found it impossible to rotate the map instead of the image, in both basemap and cartopy, so I resorted to rotating the image. Is there a way to rotate the map?
2. How do I improve the output of cartopy? I think the problem is the way I am calculating the extent. Is there a way to provide meters to set the boundaries of the image?
3. Is there a better way to do what I am trying to do? Are there any projections specific to this kind of application?
4. I am adjusting the scale (the part where I decide the number of km per pixel) manually. Is there a way to do this based on the satellite's altitude?
Any sort of input would be highly appreciated. Thank you so much for your time!
If you are interested you can find the project here.
As far as I can see, the underlying Proj.4 has no ability to define satellite projections with rotated perspectives (happy to be shown otherwise - I'm no expert!) (note: perhaps via ob_tran?). This is the main reason you can't do this in "native" coordinates/orientation with Basemap or Cartopy.
This question really comes down to a georeferencing problem, for which I couldn't find enough information in places like https://www.cder.dz/download/Art7-1_1.pdf.
My solution is entirely a fudge, but it does get you quite close to referencing this image. I doubt the fudge factors are actually universal, which is a bit of an issue if you want to write general-purpose code.
Some of the fudges I had to make (trial-and-error):
adjust the satellite bearing by 3.2 degrees
adjust where the image centre is by moving it along the satellite trajectory by 10km
adjust where the image centre is by moving it perpendicularly along the satellite trajectory by 10km
scale the x and y pixel sizes by 0.62 and 0.65 respectively
use the "near-sided perspective" projection at an unrealistic satellite_height
The result is what appears to be a relatively well registered image, but as I say, seems unlikely to be generally applicable to all images received:
The code to produce this image (fairly involved, but complete):
import urllib.request
urllib.request.urlretrieve('https://i.stack.imgur.com/UBIuA.jpg', 'm3.jpg')
import matplotlib.pyplot as plt
import numpy as np
import cartopy.crs as ccrs
from scipy import ndimage
import cartopy.feature
im = plt.imread('m3.jpg')
im = im[:,85:995] # crop only the first part of whole image
rot = 198.3913296679117 # degrees, direction of sat movement
center = (50.83550180700588, 16.430852851867176) # lat long
import numpy as np
from cartopy.geodesic import Geodesic
import matplotlib.transforms as mtransforms
from matplotlib.axes import Axes
tweaked_rot = rot - 3.2
geod = Geodesic()
# Move the center along the trajectory of the satellite by 10 km
f = np.array(
    geod.direct([center[1], center[0]],
                180 - tweaked_rot,
                10000))
tweaked_center = f[0, 0], f[0, 1]
# Move the center perpendicular to the satellite trajectory by 10 km
f = np.array(
    geod.direct([tweaked_center[0], tweaked_center[1]],
                180 - tweaked_rot + 90,
                10000))
tweaked_center = f[0, 0], f[0, 1]
data_crs = ccrs.NearsidePerspective(
    central_latitude=tweaked_center[1],
    central_longitude=tweaked_center[0],
)
# Compute the center in data_crs coordinates.
center_lon_lat_ortho = data_crs.transform_point(
    tweaked_center[0], tweaked_center[1], ccrs.Geodetic())
# Define the affine rotation in terms of matplotlib transforms.
rotation = mtransforms.Affine2D().rotate_deg_around(
    center_lon_lat_ortho[0], center_lon_lat_ortho[1], tweaked_rot)
# Some fudge factors. Sorry - these are entirely application specific,
# perhaps some reading of https://www.cder.dz/download/Art7-1_1.pdf
# would enlighten these... :(
ff_x, ff_y = 0.62, 0.65
ff_x = ff_y = 0.81
x_extent = im.shape[1]*4000/2 * ff_x
y_extent = im.shape[0]*4000/2 * ff_y
img_extent = [-x_extent, x_extent, -y_extent, y_extent]
fig = plt.figure(figsize=(10, 10))
ax = plt.axes(projection=data_crs)
ax.margins(0.02)
with ax.hold_limits():
    ax.stock_img()
# Using matplotlib's image transforms if the projection is the
# same as the map, otherwise we need to fall back to cartopy's
# (slower) image resampling algorithm
if ax.projection == data_crs:
    transform = rotation + ax.transData
else:
    transform = rotation + data_crs._as_mpl_transform(ax)
# Use the original Axes method rather than cartopy's GeoAxes.imshow.
mimg = Axes.imshow(ax, im, origin='upper', cmap='gray',
                   extent=img_extent, transform=transform)
lower_left = rotation.frozen().transform_point([-x_extent, -y_extent])
lower_right = rotation.frozen().transform_point([x_extent, -y_extent])
upper_left = rotation.frozen().transform_point([-x_extent, y_extent])
upper_right = rotation.frozen().transform_point([x_extent, y_extent])
plt.plot(lower_left[0], lower_left[1],
         upper_left[0], upper_left[1],
         upper_right[0], upper_right[1],
         lower_right[0], lower_right[1],
         marker='x', color='black',
         transform=data_crs)
ax.coastlines(resolution='10m', color='yellow', linewidth=1)
ax.add_feature(cartopy.feature.BORDERS, linestyle='-', edgecolor='yellow')
sat_pos = np.array(geod.direct(tweaked_center, 180 - tweaked_rot,
                               np.linspace(-x_extent*2, x_extent*2, 50)))
with ax.hold_limits():
    plt.plot(sat_pos[:, 0], sat_pos[:, 1], transform=ccrs.Geodetic(),
             label='Satellite path')
plt.plot(tweaked_center, 'ob')
plt.legend()
As you can probably tell, I got a bit carried away with this question. It is a super interesting problem, but not really a cartopy/Basemap one per se.
Hope that helps!