Multiple images in same view - vispy

Is there a way to show multiple images on the same view with different offsets?
The _build_vertex_data in vispy/visuals/image.py doesn't seem to take any offsets.
For example, I would like to have 3 images side by side, allowing me to zoom in and out of them as a group.

You can achieve this by applying a transformation to the visual.
from vispy import scene
from vispy.visuals.transforms import STTransform
...
image1 = scene.visuals.Image(...)
image2 = scene.visuals.Image(...)
# sets the x-axis offset to 42 (the first element of translate is x)
image2.transform = STTransform(translate=[42])
# equivalent to STTransform(translate=[42, 0, 0, 0],
#                           scale=[1, 1, 1, 1])
...
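For example, here is a minimal sketch (untested, assuming three equally sized 512x512 images and a PanZoomCamera) that lays the images out side by side and lets you pan and zoom them as a group:

import numpy as np
from vispy import app, scene
from vispy.visuals.transforms import STTransform

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = scene.PanZoomCamera(aspect=1)

width = 512  # assumed image width in pixels
for i in range(3):
    data = np.random.rand(512, 512).astype(np.float32)  # placeholder image data
    image = scene.visuals.Image(data, parent=view.scene)
    # shift each image along x by its index times the image width
    image.transform = STTransform(translate=[i * width, 0])

view.camera.set_range()  # fit the camera to all three images at once
app.run()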

Related

Altair: merge multiple identical legends when using resolve_scale to merge color and shape properties

Following a frequent issue in Altair:
merging legends 1
merging legends 2
combining color and shape
I want to plot several point series with line plots and point marks, each visualized with different colors, shapes, and stroke dashes.
This works as expected when using resolve_scale:
import altair as alt
import numpy as np
import pandas as pd

x = np.arange(0, 5, 0.1)
mask = np.ones_like(x)
mask[::2] = 0
df = pd.DataFrame({
    "x": x,
    "y": np.sin(x)*mask + np.cos(x)*(1-mask),
    "y2": np.sin(2*x)*mask + np.cos(2*x)*(1-mask),
    "col": mask
})
base = alt.Chart(df).mark_line(point=True, size=1).encode(
    alt.X("x:Q"),
    color=alt.Color("col:N"),
    shape=alt.Shape("col:N"),
    strokeDash=alt.StrokeDash("col:N")
).resolve_scale(color="independent", shape="independent", strokeDash="independent")
base.encode(alt.Y("y:Q"))
But when it is concatenated with other charts that use a different y-value, multiple identical legends appear:
base.encode(alt.Y("y:Q")) | base.encode(alt.Y("y2:Q"))
I understand this is the purpose of resolve_scale, but I would really appreciate a workaround.
Not using the resolve_scale method, or using it on the concatenated chart, gives me a single legend in which every visualized property (color, shape, etc.) is listed separately.
You have set the color, shape, and strokeDash to one thing: "col:N". If you want them to be independent, then define them as different things.
base = alt.Chart(df).mark_line(point=True, size=1).encode(
    alt.X("x:Q"),
    color=alt.Color("col:N"),
    shape=alt.Shape("col:N"),
    strokeDash=alt.StrokeDash("col:N")
)
h = (
    base.encode(alt.Y("y:Q"), color=alt.value('red'))
    | base.encode(alt.Y("y2:Q"), color=alt.value('blue')).resolve_scale(
        color="independent", shape="independent", strokeDash="independent")
)
As for a workaround, you could go into h.hconcat[0].encoding and h.hconcat[1].encoding and change the mapping to whatever you want Vega-Lite to read, but at that point I'd just use a different library.
Hopefully this helps.
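Another common workaround is to keep the shared encodings but suppress the legend on the second chart, so only one copy is drawn. A small sketch (assuming base is the chart defined in the question):

# disable the legends on the right-hand chart so only the left one is shown
left = base.encode(alt.Y("y:Q"))
right = base.encode(
    alt.Y("y2:Q"),
    color=alt.Color("col:N", legend=None),
    shape=alt.Shape("col:N", legend=None),
    strokeDash=alt.StrokeDash("col:N", legend=None),
)
left | right  # only the left chart contributes a legend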

How to separate human body from background in an image

I have been trying to separate the human body in an image from the background, but all the methods I have seen don't seem to work very well for me.
I have collected the following images:
The image of the background
The image of the background with the person in it.
Now I want to cut out the person from the background.
I tried subtracting the image of the background from the image with the person using res = cv2.subtract(background, foreground) (I am new to image processing).
Background subtraction methods in OpenCV like cv2.BackgroundSubtractorMOG2() only work with videos or image sequences, and the contour detection methods I have seen are only for solid shapes.
And grabCut doesn't quite work well for me because I would like to automate the process.
Given the images I have (Image of the background and image of the background with the person in it), is there a method of cutting the person out from the background?
I wouldn't recommend a neural net for this problem. That's a lot of work for something like this where you have a known background. I'll walk through the steps I took to do the background segmentation on this image.
First I shifted into the LAB color space to get some lighting-resistant channels to work with. I did a simple subtraction of foreground and background and combined the a and b channels.
You can see that there is still significant color change in the background even with a less light-sensitive color channel. This is likely due to the auto white balance on the camera; you can see that some of the background colors change when you step into view.
The next step I took was thresholding off of this image. The optimal threshold values may not always be the same; you'll have to adjust to a range that works well for your set of photos.
I used OpenCV's findContours function to get the segmentation points of each blob and filtered the available contours by size. I set a size threshold of 15000. For reference, the person in the image had a pixel area of 27551.
Then it's just a matter of cropping out the contour.
This technique works for any good thresholding strategy. If you can improve the consistency of your pictures by turning off auto settings and ensure good contrast of the person against the wall then you can use simpler thresholding strategies and get good results.
Just for fun:
Edit:
I forgot to add in the code I used:
import cv2
import numpy as np
# rescale values
def rescale(img, orig, new):
    img = np.divide(img, orig);
    img = np.multiply(img, new);
    img = img.astype(np.uint8);
    return img;
# get abs(diff) of all hue values
def diff(bg, fg):
    # do both sides
    lh = bg - fg;
    rh = fg - bg;
    # pick minimum # this works because of uint wrapping
    low = np.minimum(lh, rh);
    return low;
# load image
bg = cv2.imread("back.jpg");
fg = cv2.imread("person.jpg");
fg_original = fg.copy();
# blur
bg = cv2.blur(bg,(5,5));
fg = cv2.blur(fg,(5,5));
# convert to lab
bg_lab = cv2.cvtColor(bg, cv2.COLOR_BGR2LAB);
fg_lab = cv2.cvtColor(fg, cv2.COLOR_BGR2LAB);
bl, ba, bb = cv2.split(bg_lab);
fl, fa, fb = cv2.split(fg_lab);
# subtract
d_b = diff(bb, fb);
d_a = diff(ba, fa);
# rescale for contrast
d_b = rescale(d_b, np.max(d_b), 255);
d_a = rescale(d_a, np.max(d_a), 255);
# combine
combined = np.maximum(d_b, d_a);
# threshold
# check your threshold range, this will work for
# this image, but may not work for others
# in general: having a strong contrast with the wall makes this easier
thresh = cv2.inRange(combined, 70, 255);
# opening and closing
kernel = np.ones((3,3), np.uint8);
# closing
thresh = cv2.dilate(thresh, kernel, iterations = 2);
thresh = cv2.erode(thresh, kernel, iterations = 2);
# opening
thresh = cv2.erode(thresh, kernel, iterations = 2);
thresh = cv2.dilate(thresh, kernel, iterations = 3);
# contours
# findContours returns (image, contours, hierarchy) in OpenCV 3 but
# (contours, hierarchy) in OpenCV 4; indexing [-2] works for both
contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2];
# filter contours by size
big_cntrs = [];
marked = fg_original.copy();
for contour in contours:
    area = cv2.contourArea(contour);
    if area > 15000:
        print(area);
        big_cntrs.append(contour);
cv2.drawContours(marked, big_cntrs, -1, (0, 255, 0), 3);
# create a mask of the contoured image
mask = np.zeros_like(fb);
mask = cv2.drawContours(mask, big_cntrs, -1, 255, -1);
# erode mask slightly (boundary pixels on wall get color shifted)
mask = cv2.erode(mask, kernel, iterations = 1);
# crop out
out = np.zeros_like(fg_original) # Extract out the object and place into output image
out[mask == 255] = fg_original[mask == 255];
# show
cv2.imshow("combined", combined);
cv2.imshow("thresh", thresh);
cv2.imshow("marked", marked);
# cv2.imshow("masked", mask);
cv2.imshow("out", out);
cv2.waitKey(0);
Since it is very easy to find datasets containing a lot of human bodies, I suggest you implement neural network segmentation techniques to extract the human body cleanly. Please check this link to see a similar example.
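For example, here is a rough sketch (assuming PyTorch and torchvision are installed; person.jpg is a placeholder file name) that uses a pretrained DeepLabV3 model to build a person mask:

import cv2
import numpy as np
import torch
import torchvision
from torchvision import transforms

# DeepLabV3 pretrained on COCO with Pascal VOC classes;
# newer torchvision versions use weights=... instead of pretrained=True
model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

img = cv2.imread("person.jpg")                 # placeholder file name
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
batch = preprocess(rgb).unsqueeze(0)

with torch.no_grad():
    out = model(batch)["out"][0]               # (num_classes, H, W) logits

classes = out.argmax(0).byte().cpu().numpy()
mask = (classes == 15).astype(np.uint8) * 255  # class 15 is "person" in the VOC label set

cut = cv2.bitwise_and(img, img, mask=mask)     # keep only the person pixels
cv2.imwrite("person_cut.png", cut)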

How to scale SCNNodes to fit in a box?

I have multiple collada files with objects (humans) of various sizes, created from different 3D program sources. I want to scale the objects so they fit inside a frame or box. From my reading, I can't use the bounding box to scale the node, so what feature do you utilize to scale the nodes relative to each other?
// humanNode = {...get node, which is some unknown size }
let (minBound, maxBound) = humanNode.boundingBox
let blockNode = SCNNode(geometry: SCNBox(width: 10, height: 10, length: 10, chamferRadius: 0))
// calculate a scale factor so it fits inside the box without having known its size beforehand
s = { ...some method to calculate the scale to fit the humanNode into the box }
humanNode.scale = SCNVector3Make(s, s, s)
How do I get its size relative to the literal box I want to put it in, and scale it?
Is it possible to draw the node off screen to measure its size?

How to scale two game objects with different scale sizes proportionally?

I have two game objects, one for the main canvas and the other for the editor (a dummy version).
The scale of the main game object is (1, 1, 1) and the other one (the dummy) is (0.3, 0.3, 0.3). What I want to do is scale the main game object proportionally, based on the scale percentage that the user sets on the dummy game object. How can I do this?
DummyGameObject.transform.localScale = CanvasGameObject.transform.localScale * 0.33f; //or
DummyGameObject.transform.localScale = CanvasGameObject.transform.localScale / 3;

skewing a UIImageView using CGAffineTransform

I am trying to skew a rectangle so the two vertical sides are slanted but parallel and the top and bottom are horizontal.
I am trying to use CGAffineTransform and have found this code, but I can't figure out what to put in the various parts.
imageView.layer.somethingMagic.imageRightTop = (CGPoint){ 230, 30 };
imageView.layer.somethingMagic.imageRightBottom = (CGPoint){ 300, 150 };
#define CGAffineTransformDistort(t, x, y) (CGAffineTransformConcat(t, CGAffineTransformMake(1, y, x, 1, 0, 0)))
#define CGAffineTransformMakeDistort(x, y) (CGAffineTransformDistort(CGAffineTransformIdentity, x, y))
Although this is said to be easy, I don't know what to put in the different places.
I assume imageView would be the image that I want to change, but what goes into somethingMagic, imageRightTop, and imageRightBottom?
Also, how do I define t?
If there is a more thorough explanation I would appreciate it since in most cases I found only this as the explanation of what to do to skew a rectangle.
Thanks
Let's assume you have a variable named imageView holding a reference to your UIImageView.
I wrote a little sample to demonstrate how you could get this behavior. What this code does is create a new CGAffineTransform matrix. This matrix has the same values as the identity transform matrix with one exception: the value at location [2,1]. This value is controlled by the c parameter of the CGAffineTransformMake function and controls the shearing along the x-axis. You can change the amount of shearing by setting shearValue.
The code:
Objective-C
CGFloat shearValue = 0.3f; // You can change this to anything you want
CGAffineTransform shearTransform = CGAffineTransformMake(1.f, 0.f, shearValue, 1.f, 0.f, 0.f);
[imageView setTransform:shearTransform];
Swift 5
let shearValue = CGFloat(0.3) // You can change this to anything you want
let shearTransform = CGAffineTransform(a: 1, b: 0, c: shearValue, d: 1, tx: 0, ty: 0)
imageView.transform = shearTransform
And here's what the shearTransform-matrix looks like:
[1 0 0]
[0.3 1 0]
[0 0 1]