pdfSweep with rotated rectangle (iText 7)

I have a requirement to perform redaction in iText 7. We have several rectangles that have been selected by the user, some of which have been rotated. I have not found a way to rotate rectangles in iText 7. Usually, the way we draw "rotated" rectangles is to perform some mathematical operations on a "fake" rectangle we build in code, and then draw it either as a series of lines, like so:
// rect.mRotation is the user's chosen rotation in degrees;
// r is our own "fake" rectangle helper with corner properties,
// and c is the canvas we draw on.
if (rect.mRotation > 0)
{
    r.Rotate(DegreeToRadian(rect.mRotation));
}

// Trace the four (rotated) corners and stroke the outline.
c.MoveTo(r.TopLeft.X, r.TopLeft.Y);
c.LineTo(r.TopRight.X, r.TopRight.Y);
c.LineTo(r.BottomRight.X, r.BottomRight.Y);
c.LineTo(r.BottomLeft.X, r.BottomLeft.Y);
c.LineTo(r.TopLeft.X, r.TopLeft.Y);
c.Stroke();
For images and similar content we cannot take the approach above. In those cases we use an AffineTransform to simulate the rotation, applied to the image before it is added to the document. Both of these methods work perfectly.
Unfortunately for us, the pdfSweep tool only accepts iText.Kernel.Geom rectangles. We are looking for a way to still pass an iText.Kernel.Geom.Rectangle which has had transforms applied (i.e. a rectangle which has been rotated). We have tried setting the llx/urx values manually using the setBBox method, but this won't affect the rotation.
Does anyone know how we can go about redacting content over a given rectangular area that has been rotated?
Thanks
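
Update: for illustration, one axis-aligned fallback would be to over-redact with the bounding box of the rotated rectangle. A minimal sketch of that idea (the helper class and method names are ours, not iText's; Rectangle is iText.Kernel.Geom.Rectangle, and the result could then be passed to pdfSweep, e.g. via a PdfCleanUpLocation, as usual):

using System;
using iText.Kernel.Geom;

static class RedactionGeometry
{
    // Hypothetical helper: rotate the four corners of an axis-aligned
    // rectangle around its centre, then return the axis-aligned bounding
    // box of the result. Note this over-redacts: the box covers more
    // than the rotated shape itself.
    public static Rectangle RotatedBoundingBox(Rectangle rect, double degrees)
    {
        double theta = degrees * Math.PI / 180.0;
        double cx = rect.GetX() + rect.GetWidth() / 2.0;
        double cy = rect.GetY() + rect.GetHeight() / 2.0;

        double minX = double.MaxValue, minY = double.MaxValue;
        double maxX = double.MinValue, maxY = double.MinValue;

        double[] xs = { rect.GetX(), rect.GetX() + rect.GetWidth() };
        double[] ys = { rect.GetY(), rect.GetY() + rect.GetHeight() };
        foreach (double x in xs)
        {
            foreach (double y in ys)
            {
                // Rotate each corner around the centre point.
                double rx = cx + (x - cx) * Math.Cos(theta) - (y - cy) * Math.Sin(theta);
                double ry = cy + (x - cx) * Math.Sin(theta) + (y - cy) * Math.Cos(theta);
                minX = Math.Min(minX, rx); maxX = Math.Max(maxX, rx);
                minY = Math.Min(minY, ry); maxY = Math.Max(maxY, ry);
            }
        }
        return new Rectangle((float)minX, (float)minY,
                             (float)(maxX - minX), (float)(maxY - minY));
    }
}

This covers more area than the rotated shape itself, so it would only be acceptable where over-redaction is safe.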

Related

CIDetector to detect any object's bounding box

Imagine having an array of images like these. The background is always white (even in the 3rd pic; the main object there is the big brown rectangle with shapes inside). No matter the type of image, you would need to:
1) find the main object's bounding rectangle,
2) crop it out,
3) and place it in the center of a blank square image.
How would you achieve this? I already know how to crop out anything given its rectangle and place it anywhere, but I need to know the best way to do the first step. The Vision API can detect rectangles, faces and barcodes, but it seems what I need is even simpler: I just need to find the leftmost, rightmost, topmost and bottommost non-white pixels, and those will be my bounds. Is there any way to do this other than iterating over the pixel buffer pixel by pixel?
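For reference, the brute-force scan I am trying to avoid would look roughly like this (a sketch assuming an RGBA8-backed CGImage; the real component order depends on the image's bitmap info):

#import <CoreGraphics/CoreGraphics.h>

// Sketch: find the bounding box of all non-white pixels by scanning
// the raw bitmap once. Assumes 8-bit RGBA-ish data; check the image's
// CGBitmapInfo before trusting the component order.
static CGRect NonWhiteBounds(CGImageRef image) {
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    size_t bytesPerRow = CGImageGetBytesPerRow(image);
    size_t bytesPerPixel = CGImageGetBitsPerPixel(image) / 8;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
    const UInt8 *bytes = CFDataGetBytePtr(data);

    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            const UInt8 *p = bytes + y * bytesPerRow + x * bytesPerPixel;
            // Treat "almost white" as background.
            if (p[0] < 250 || p[1] < 250 || p[2] < 250) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    CFRelease(data);
    if (maxX < minX) return CGRectNull; // entirely white
    return CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
}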
What is the type of these images? UIImage? CAShapeLayer? In most cases, you should be able to get the .frame from each image in the array, which will give you a CGRect with the X and Y origin coordinates as well as the height and width. You should also have access to the .midX and .midY coordinates, or .center.x and .center.y, to find the midpoint you're looking for. Unless what you're talking about is taking in a flattened bitmap like a .jpg or .png and running shape detection on its contents, in which case you would need something like Vision to accomplish what you're trying to do.

iPhone - apply CGAffineTransformRotate on point in hitTest

I've got an image that's allowed to be rotated and scaled by the user.
Every time the user clicks the image I try to figure out if the point is transparent or not.
If it's transparent I return null in my view's hitTest; if it's not transparent I return the view. Problems start when the user rotates the image. In my hitTest method, I need to transform the point according to the view's current rotation; otherwise the point will indicate an irrelevant location on the view (and the image).
How do I do that?
Thank you very much.
This CGAffineTransform Reference might help:
CGPointApplyAffineTransform
CGRectApplyAffineTransform
CGSizeApplyAffineTransform
But before you start thinking that you need to perform the mapping by hand, I would suggest giving it a try as if the current transform were CGAffineTransformIdentity, and coding your coordinate detection accordingly. You might be surprised by the results...
My own experience says that when you get your points from UITouch's locationInView:, the inverted matrix of that view is applied to the point before it is handed back to you.
So you probably don't need any of the CGxxxApplyAffineTransform functions unless you generate the points yourself, outside of the event system.
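If you do end up generating points yourself, a sketch of the by-hand mapping (assuming the view's default anchor point, so the transform is applied around its center):

#import <UIKit/UIKit.h>

// Map a point from the superview's coordinate space into a transformed
// view's own coordinates. UITouch's locationInView: already does this;
// this is only for points you produce outside the event system.
static CGPoint MapPointIntoView(CGPoint point, UIView *view) {
    // view.center is where the anchor point sits in the superview,
    // and the transform is applied around that anchor.
    CGPoint p = CGPointMake(point.x - view.center.x, point.y - view.center.y);
    p = CGPointApplyAffineTransform(p, CGAffineTransformInvert(view.transform));
    // Shift back so the origin is the view's top-left corner.
    return CGPointMake(p.x + view.bounds.size.width / 2.0,
                       p.y + view.bounds.size.height / 2.0);
}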

Efficiently draw CGPath on CATiledLayer

How would I efficiently draw a CGPath on a CATiledLayer? I'm currently checking if the bounding box of the tile intersects the bounding box of the path like this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
    CGRect boundingBox = CGPathGetPathBoundingBox(drawPath);
    CGRect rect = CGContextGetClipBoundingBox(context);
    if (!CGRectIntersectsRect(boundingBox, rect))
        return;
    // Draw path...
}
This is not very efficient, because drawLayer:inContext: is called multiple times from multiple threads, and this results in drawing the path many times.
Is there a better, more efficient way to do this?
The simplest option is to draw your curve into a large image and then tile the image. But if you're tiling, it probably means the image would be too large, or you would have just drawn the path in the first place, right?
So you probably need to split your path up. The simplest approach is to split it up element by element using CGPathApply. For each element, you can check its bounding box and determine if that element falls in your bounds. If not, just keep track of the last end point. If so, then move to the last end point you saw and add the element to a new path for this tile. When you're done, each tile will draw its own path.
Technically you will "draw" things that go outside your bounds here (such as a line that extends beyond the tile), but this is much cheaper than it sounds. Core Graphics is going to clip single elements very easily. The goal is to avoid calculating elements that are not in your bounding box at all.
Be sure to cache the resulting path. You don't need to calculate the path for every tile; just the ones you're drawing. But avoid recalculating it every time the tile draws. Whenever the data changes, dump your cache. If there are a very large number of tiles, you can also use NSCache to optimize this even better.
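A sketch of that element-by-element split, in case it helps (names are illustrative; only move and line elements are shown, and curve elements would be handled the same way using their control points):

#import <CoreGraphics/CoreGraphics.h>

typedef struct {
    CGRect tileRect;           // bounds of the tile being built
    CGMutablePathRef tilePath; // path accumulated for this tile
    CGPoint lastPoint;         // last end point seen so far
} TileSplitInfo;

static void SplitApplier(void *info, const CGPathElement *element) {
    TileSplitInfo *s = (TileSplitInfo *)info;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
            s->lastPoint = element->points[0];
            break;
        case kCGPathElementAddLineToPoint: {
            CGPoint end = element->points[0];
            // Cheap test: the segment's bounding box vs. the tile.
            CGRect box = CGRectStandardize(CGRectMake(
                s->lastPoint.x, s->lastPoint.y,
                end.x - s->lastPoint.x, end.y - s->lastPoint.y));
            if (CGRectIntersectsRect(box, s->tileRect)) {
                CGPathMoveToPoint(s->tilePath, NULL, s->lastPoint.x, s->lastPoint.y);
                CGPathAddLineToPoint(s->tilePath, NULL, end.x, end.y);
            }
            s->lastPoint = end;
            break;
        }
        default:
            // Quad/cubic curves and close-subpath: same idea, using the
            // elements' control points for the bounding-box test.
            break;
    }
}

static CGPathRef CreatePathForTile(CGPathRef fullPath, CGRect tileRect) {
    TileSplitInfo s = { tileRect, CGPathCreateMutable(), CGPointZero };
    CGPathApply(fullPath, &s, SplitApplier);
    return s.tilePath; // caller is responsible for releasing
}

Cache the returned path per tile, as described above.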
You don't show where the path gets created. If possible, you might try building the path up in the -drawLayer:inContext: method, only creating the portion of it needed for the tile being drawn.
As with all performance problems, you should use Instruments to profile your code and find out exactly where the bottlenecks are. Have you tried that already, and if so, what did you find?
As a side note, is there a reason you're using CGPath instead of UIBezierPath? From Apple's documentation:
For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see "Drawing Shapes Using Bezier Paths."

Placing objects around image highlights - cocos2d / OpenGL / Core Graphics?

I would like to extract the white/bright areas of an image and place custom objects in those areas. I need to know which framework to work with; if anyone has done something similar, I would appreciate an answer. I know how to get pixel values, but the hard part is creating a bloom/star effect in those highlighted areas.
You could make a mask of the pixels whose luminance value is above a threshold, then blur the mask (or process it however you like) and composite it above the image.
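A sketch of the thresholding step (Rec. 601 luma weights, assuming 8-bit RGBA pixels; the blur and compositing are left out):

#import <Foundation/Foundation.h>

// Decide whether one pixel belongs in the highlight mask.
static BOOL IsHighlight(const uint8_t *pixel, uint8_t threshold) {
    double luma = 0.299 * pixel[0] + 0.587 * pixel[1] + 0.114 * pixel[2];
    return luma >= threshold;
}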

How to determine if iPad user taps within an irregular shaped image?

I've hooked up a UITapGestureRecognizer to a UIImageView containing the image I'd like to display on an iPad screen and am able to consume the user taps just fine. However, my image is that of a hand on a table and I'd like to know if the user has tapped on the hand or on the table part of the image. I can get the x,y coordinates of the user tap with CGPoint tapLocation = [recognizer locationInView:self.view]; but I'm at a loss for how to map that CGPoint to, say, the region of the image that contains the hand vs. the region that contains the table. Everything I've read so far deals with determining if a CGPoint is in a particular rectangular area, but what if you need to determine if that CGPoint is located in the boundaries of a more irregular shape? Is that even possible? Any suggestions or just pointing me in the right direction would be a big help. Thanks!
You could use pointInside:withEvent: to define the hit area programmatically.
To elaborate: take the point and evaluate whether it falls in the area you're after with a series of if statements. If it does, return TRUE; if it doesn't, return FALSE. If this is related to this post, you could compare the distance from the point to the center of your circle using the Pythagorean theorem, as sketched below.
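A sketch of that idea for a circular hit area (the radius choice is illustrative):

// Inside your UIView subclass: accept touches only within the circle
// inscribed in the view's bounds.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    CGFloat radius = self.bounds.size.width / 2.0;
    CGFloat dx = point.x - CGRectGetMidX(self.bounds);
    CGFloat dy = point.y - CGRectGetMidY(self.bounds);
    // Pythagorean theorem: compare squared distance to squared radius.
    return (dx * dx + dy * dy) <= radius * radius;
}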
Late to the party, but the core tool you want here is a "point in polygon" routine. This is a generic approach, independent of iOS. Google has lots of info, but the general approach is:
1) Define your closed polygon. (It sounds like this might be a bit of work in your case.)
2) Choose any second point not equal to your original point (yes, any point).
3) For each edge of the polygon, determine whether the ray from your original point through that second point intersects the edge. This requires a line-segment-intersects-ray routine, also available on the 'tubes.
4) If the number of intersections is odd, the point is inside the polygon; if the count is even, it's outside. A code sketch follows below.
For general geometry-type issues, I highly recommend Paul Bourke: http://local.wasp.uwa.edu.au/~pbourke/geometry/insidepoly/
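A minimal sketch of steps 3 and 4 (this variant casts a horizontal ray to the right of the test point rather than through an arbitrary second point, which amounts to the same even-odd test):

#import <CoreGraphics/CoreGraphics.h>

// Even-odd ray-casting test. verts holds the polygon's vertices in
// order; the polygon is treated as closed.
static BOOL PointInPolygon(CGPoint p, const CGPoint *verts, size_t count) {
    BOOL inside = NO;
    for (size_t i = 0, j = count - 1; i < count; j = i++) {
        // Does the edge verts[j] -> verts[i] cross the horizontal ray?
        if (((verts[i].y > p.y) != (verts[j].y > p.y)) &&
            (p.x < (verts[j].x - verts[i].x) * (p.y - verts[i].y) /
                   (verts[j].y - verts[i].y) + verts[i].x)) {
            inside = !inside;
        }
    }
    return inside;
}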
You can use a bounding rectangle that covers most or all of the hand.
If the user is using his finger to tap either the hand or the table, I doubt that you want him or her to be extremely precise with the tap.
An extension of the bounding-rectangle answer: you could define several smaller bounding rectangles that approximate the hand without covering the rest of the screen.
OR
you could keep a list of rectangles, one per object, and put the hand's rectangle at the end of the list. That way, if a tap lands on button X at the top right of the screen, technically inside the hand's rectangle, button X is chosen because its rectangle is found first.
You could define the shape as a black-and-white bitmap (1 bit per pixel) and check whether the bit for the tapped pixel is set. This would eat a lot of memory if you had many large shapes, but for one bitmap of a hand it should not be a big deal (a sketch of the bit lookup follows below).
Or define the shape as a polygon; then you need to do a point-in-polygon test. Wikipedia has a wonderful article on this, with links to code: http://en.wikipedia.org/wiki/Point_in_polygon
iPad libraries might have this already implemented; sorry, I cannot help you there, as I am not an iPad developer.
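For the bitmap option, the per-tap check is just a bit lookup. A sketch (all names illustrative; assumes a row-major mask, 1 bit per pixel, rows padded to whole bytes, most significant bit first):

#import <Foundation/Foundation.h>

// Return YES if the mask bit for pixel (x, y) is set.
static BOOL MaskHit(const uint8_t *maskBits, size_t width, size_t x, size_t y) {
    size_t bytesPerRow = (width + 7) / 8;
    uint8_t byte = maskBits[y * bytesPerRow + x / 8];
    return (byte >> (7 - (x % 8))) & 1;
}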