How to rotate an image with content on the same spot? - matlab

I have an image like below
then I want to rotate it, but I don't want its position to be changed.
For example the output should look like below
If I do imrotate, it will change its position. Is there any other way to rotate this without changing its position?

The imrotate function rotates the entire image by the specified angle. What you want is to rotate only a part of the image. For that you'll have to specify which part you want to rotate: formally speaking, the rectangle in which the symbol is located.
The coordinates of this rectangle can be found by selecting all rows and columns where any pixel is black. This can be done by taking the sum over all rows, finding the first and last non-zero entries there, and doing the same over all columns.
sx=find(sum(im==0,1),1,'first');
ex=find(sum(im==0,1),1,'last');
sy=find(sum(im==0,2),1,'first');
ey=find(sum(im==0,2),1,'last');
The relevant part of the image is then
im(sy:ey,sx:ex)
Now you can rotate only this part of the image and save it to the same location within the whole image:
im(sy:ey,sx:ex) = imrotate(im(sy:ey,sx:ex),180);
with the desired result:
Note: this will only work for 180° rotations, such as the example you provided. If you rotate by any other angle, e.g. 90° or an arbitrary angle such as 23°, the output of imrotate will not have the same size as the input, so the assignment im(sy:ey,sx:ex) = ... will throw a size-mismatch error.
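If you do need other angles, one possible workaround (a sketch, not something the answer above covers) is imrotate's 'crop' bounding-box option, which keeps the output the same size as the input. The corners that rotate out of the region are filled with zeros, so for a black-on-white uint8 image you would paint them white again, for example:
region  = im(sy:ey,sx:ex);
rotated = imrotate(region,23,'bilinear','crop');            % same size as region
inside  = imrotate(true(size(region)),23,'nearest','crop'); % false where imrotate padded with zeros
rotated(~inside) = 255;                                     % restore the white background (assumes uint8, white = 255)
im(sy:ey,sx:ex) = rotated;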

Related

Camera ScreenToWorldPoint returns a biased value when an output texture is set

There are two cameras; the only difference is that one has its output set to a render texture.
Camera1 computes a wrong result with ScreenToWorldPoint; in other words, there is an error in the coordinate transformation.
Camera2 is correct with ScreenToWorldPoint.
This is a problem I encountered on the job. I built a camera that is a complete copy of the original, just without setting the output.
Believe me, after this change the result is correct, at least visually.
Why does setting the output affect the coordinate transformation? The position of the camera and its other attributes are no different.
When you set a targetTexture, the texture becomes the screen, and the size of the texture will affect the calculation of the screen position.
You may use ViewportToWorldPoint instead, because a viewport coordinate always runs from (0,0) to (1,1):
var viewPoint1 = camera1.ScreenToViewportPoint(mousePosition);
var worldPoint2 = camera2.ViewportToWorldPoint(viewPoint1);

pdfSweep with rotated rectangle (itext7)

I have a requirement to perform redaction in itext7. We have several rectangles which have been selected by the user, some of which have been rotated. I have not found the ability to rotate rectangles in itext7. Usually, the way we draw "rotated" rectangles is to perform some mathematical operations on a "fake" rectangle in code, and then draw it either as a series of lines like so:
if (rect.mRotation > 0)
{
    r.Rotate(DegreeToRadian(rect.mRotation));
}
c.MoveTo(r.TopLeft.X, r.TopLeft.Y);
c.LineTo(r.TopRight.X, r.TopRight.Y);
c.LineTo(r.BottomRight.X, r.BottomRight.Y);
c.LineTo(r.BottomLeft.X, r.BottomLeft.Y);
c.LineTo(r.TopLeft.X, r.TopLeft.Y);
c.Stroke();
In the case of images, or something similar, we are unable to do the above. In that case we use an AffineTransform to simulate the movement, which is applied to the image before it is added to the document. Both of the previous methods work perfectly.
Unfortunately for us, the pdfSweep tool only accepts (iText.Kernel.Geom) rectangles. We are looking for a way to still pass an iText.Kernel.Geom.Rectangle which has had transforms applied (i.e. a rectangle which has been rotated). We have tried setting the llx/urx values manually using the setBBox method, but this won't affect the rotation.
Does anyone know how we can go about redacting items over a given rectangular area which has been rotated?
Thanks

CIDetector to detect any object's bounding box

Imagine having an array of images like these
The background is always white (even in the 3rd pic; the main object there is that big brown rectangle with shapes inside).
No matter what type of image is given, you would need to:
1) find main object boundary rectangle
2) crop it out like this
3) and place it in the center of a blank square image.
How would you achieve this? I already know how to crop out anything given a rectangle and place it anywhere, but I need to know the best way to do the 1st step.
The Vision API can detect rectangles, faces and barcodes, but it seems what I need is even simpler.
I just need to find the leftmost, rightmost, top and bottom non-white pixels, and those will be my bounds.
Is there any way other than iterating over the pixelBuffer pixel by pixel?
What is the type of these images? UIImage? CAShapeLayer? In most cases, you should be able to get the .frame from each image in the array, which will give you a CGRect with the X and Y origin coordinates as well as the height and width dimensions. You should also have access to .midX and .midY coordinates, or .center.x and .center.y, to find the midpoint you're looking for. Unless what you're talking about is taking in a flattened bitmap like a .jpg or .png and running some shape detection on the contents, in which case you would need something like Vision to accomplish what you're trying to do.
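If it does come down to scanning pixels, a minimal Swift sketch of that "flattened bitmap" case might look like the following (all names here are illustrative, not an existing API): render the CGImage into an RGBA buffer and look for the first and last non-white rows and columns.
import CoreGraphics

func nonWhiteBoundingBox(of image: CGImage, threshold: UInt8 = 250) -> CGRect? {
    let width = image.width, height = image.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    // draw the image into a buffer with a known RGBA layout
    let drawn: Bool = pixels.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }

    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            // treat a pixel as content if any colour channel is darker than the threshold
            if pixels[i] < threshold || pixels[i + 1] < threshold || pixels[i + 2] < threshold {
                minX = min(minX, x); maxX = max(maxX, x)
                minY = min(minY, y); maxY = max(maxY, y)
            }
        }
    }
    guard maxX >= minX, maxY >= minY else { return nil }   // image is entirely white
    // the rect is in pixel coordinates, with y counted from the top row of the buffer
    return CGRect(x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1)
}
You could then use that CGRect to crop the image and center the crop on a blank square canvas, as described in steps 2 and 3.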

How to detect any 4-sided polygon in an image and adjust it to a rectangle?

For a TV screen recognition project, I need to clip the TV screen from an image.
The TV screen is actually a rectangle, but it is obviously distorted in the image taken by the phone camera. My questions are:
How to detect the 4-sided polygon (it's not a rectangle) in the image.
Once I know the polygon area in the image, how to extract that area into a Mat.
After solving question 2, how to convert the 4-sided polygon Mat to a rectangular Mat with a fixed W/H ratio.
It would be very helpful to have some sample code to reference.
Thanks for your answers!
If you want to detect the edges of your TV screen you can use some border detection (like Canny) and then use the Hough transform to obtain the lines. If you then extract the points corresponding to the intersections of the lines, you can create a homography matrix H (3x3). Finally, using this homography you can "deform" your original image to a reference frame (in our case the rectangle with a given aspect ratio). The homography is a transformation from plane to plane, so it's exactly what you need here.
If you're going to use OpenCV (which is always a good choice!), here are the functions that you could use:
Canny() - find edges in the image
HoughLines() - detect lines
findHomography() - finds the homography matrix from a set of correspondences. In your case, you will need to pass 0 as the method.
warpPerspective() - the function that you're going to use to "deform" the image to a reference frame.
Obviously, you can find similar functions for MATLAB and others...
I hope this helps you.
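For illustration, here is a minimal C++ sketch of that pipeline. It assumes you have already picked the four screen corners from the line intersections and ordered them top-left, top-right, bottom-right, bottom-left; the Canny/Hough parameters and the 1280x720 target size are only placeholders.
#include <opencv2/opencv.hpp>
#include <vector>

// Find candidate screen edges: Canny for edges, then the Hough transform for straight lines.
std::vector<cv::Vec2f> detectLines(const cv::Mat& image)
{
    cv::Mat gray, edges;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 150);   // (rho, theta) pairs
    return lines;
}

// Given the 4 corner points of the screen, map them onto a fixed-ratio rectangle.
cv::Mat rectifyScreen(const cv::Mat& image, const std::vector<cv::Point2f>& corners)
{
    const int w = 1280, h = 720;                          // target size / aspect ratio
    std::vector<cv::Point2f> target = {
        {0.f, 0.f}, {w - 1.f, 0.f}, {w - 1.f, h - 1.f}, {0.f, h - 1.f}
    };
    // method 0: plain least squares, which is what you want with exactly 4 correspondences
    cv::Mat H = cv::findHomography(corners, target, 0);
    cv::Mat rectified;
    cv::warpPerspective(image, rectified, H, cv::Size(w, h));
    return rectified;
}
The returned Mat is the "straightened" screen at the chosen aspect ratio, which answers questions 2 and 3 above once the corners are known.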

iphone cocoa : how to drag an image along a path

I am trying to figure out how can you drag an image while constraining its movement along a certain path.
I tried several tricks including animation along a path, but couldn't get the animation to play and pause and play backwards - so that seems out of the question.
Any ideas? Anyone?
What you're basically trying to do is match finger movement to a 'translation' transform.
As the user touches down and starts to move their finger you want to use the current touch point value to create a translation transform which you apply to your UIImageView. Here's how you would do it:
On touch down, save the imageview's starting x,y position.
On move, calculate the delta from old point to new one. This is where you can clamp the values. So you can ignore, say, the y change and only use the x deltas. This means that the image will only move left to right. If you ignore the x and use y, then it only moves up and down.
Once you have the 'new' calculated/clamped x,y values, use it to create a new transform using CGAffineTransformMakeTranslation(x, y). Assign this transform to the UIImageView. The image moves to that place.
Once the finger lifts, figure out the delta from the original starting x,y point to the lift-off point, then adjust the ImageView's bounds and reset the transform to CGAffineTransformIdentity. This doesn't move the object, but it sets it so subsequent accesses to the ImageView use the actual position and don't have to keep adjusting for transforms.
Moving along a grid is easy too. Just round the x,y values in step 2 so they're a multiple of the grid size (i.e. round to every 10 pixels) before you pass them on to make the translation transform.
If you want to make it extra smooth, surround the code where you assign the transform with UIView animation blocks. Mess around with the easing and timing settings. The image should drag behind a bit but smoothly 'rubber-band' from one touch point to the next.
See this Sample Code: Move Me
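A minimal Swift sketch of those steps, constraining the drag to the x axis and snapping to a 10-pixel grid (the class and property names are just illustrative; CGAffineTransform(translationX:y:) is the Swift spelling of CGAffineTransformMakeTranslation):
import UIKit

class DraggableImageView: UIImageView {
    private var startCenter = CGPoint.zero   // image view position at touch-down
    private var startTouch = CGPoint.zero    // finger position at touch-down

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // remember: isUserInteractionEnabled must be true; UIImageView disables it by default
        startCenter = center
        startTouch = touch.location(in: superview)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let point = touch.location(in: superview)
        // delta from the touch-down point, clamped: ignore y so the image only moves left/right
        var dx = point.x - startTouch.x
        let grid: CGFloat = 10
        dx = (dx / grid).rounded() * grid        // snap to the grid
        UIView.animate(withDuration: 0.1) {      // small animation so it 'rubber-bands' smoothly
            self.transform = CGAffineTransform(translationX: dx, y: 0)
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // bake the translation into the view's real position and reset the transform
        center = CGPoint(x: startCenter.x + transform.tx, y: startCenter.y + transform.ty)
        transform = .identity
    }
}
Clamping to a different path (for example y-only, or a diagonal) is just a matter of how you compute dx and dy before building the translation transform.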