I am working on an app that detects a face and marks the eyes and mouth on an image. I have detected the face, eyes, and mouth using CIDetector, but the positions it returns are relative to the original image, not to the image view on which I have to mark them. For example, with a 720 x 720 image, the face and eye positions come back in 720 x 720 coordinates, but I have to show them annotated on an image view of size 320 x 320. Please advise me how I can map the positions returned by CIDetector to positions on the image view.
You can solve this by considering the ratio of the image view size to the image size.
The following is something really simple that could be used to solve your problem.
// 'returnedPoint' is the position of the eye returned by CIDetector
CGFloat ratio = 320.0 / 720.0;
// i.e. CGFloat ratio = yourImageView.frame.size.width / yourImage.size.width;
CGPoint pointOnImageView = CGPointMake(ratio * returnedPoint.x, ratio * returnedPoint.y);
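For completeness, here is the same mapping in Swift with the numbers from the question (a sketch only; it assumes the image view shows the whole image, which works out exactly here since both the image and the view are square):

let ratio = yourImageView.frame.size.width / yourImage.size.width  // 320 / 720 ≈ 0.444
let pointOnImageView = CGPoint(x: returnedPoint.x * ratio,
                               y: returnedPoint.y * ratio)
// e.g. an eye reported at (360, 360) in the 720 x 720 image maps to (160, 160) in the 320 x 320 view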
I'm trying to use the AR camera with face tracking to capture some vertices from the face mesh (and mask the area), then pass the image to OpenCV for Unity to do further processing.
Vector3 screenPosition = arCamera.GetComponent<Camera>().WorldToScreenPoint(face.transform.position + face.vertices[0]);
I'm using this, but it returns the position relative to the screen, which has a different aspect ratio than the image from "cameraManager.TryAcquireLatestCpuImage" (2340 x 1080 vs 640 x 480).
I have looked everywhere for how to transform the position from world space to the CPU image, and I tried mapping the screen coordinates to the CPU image using the display matrix and projection matrix, but no luck.
Any solution would be appreciated!
I am trying to get the image from the current ARFrame by using:
if let frame = sceneView.session.currentFrame {
    let imageBuffer = frame.capturedImage
    let orientation = UIApplication.shared.statusBarOrientation
    let viewportSize = sceneView.bounds.size
    let transformation = frame.displayTransform(for: orientation, viewportSize: viewportSize)
    let ciImage = CIImage(cvPixelBuffer: imageBuffer).transformed(by: transformation)
}
For landscape, it works great. For portrait, I get the image at the wrong angle (rotated by 180 degrees). Any idea why?
First, I should say that this is definitely an unpleasant bug.
The problem is that when you convert a portrait image, which the ARFrame contains, to a CIImage or CGImage, it loses its orientation and is rotated 180 degrees CCW. This issue affects only portrait images; landscape ones are not affected at all.
This happens because a portrait image carries no information about its orientation at the conversion stage, and thus an image taken in portrait stays that way even after being converted to a CIImage or CGImage.
To fix this you should compare the "standard" landscape width/height with the "non-standard" portrait width/height, and if the values differ, rotate the image by 180 degrees CW (or apply the orientation case .portraitUpsideDown).
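As a rough sketch of that fix applied to the snippet from the question (the portrait check is my assumption; adapt it to however you track the interface orientation):

let transformed = CIImage(cvPixelBuffer: imageBuffer).transformed(by: transformation)
// In portrait the converted image comes out rotated by 180 degrees, so rotate it back;
// .down corresponds to a 180-degree rotation.
let ciImage = orientation.isPortrait ? transformed.oriented(.down) : transformed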
Hope this helps.
Coordinate systems
We need to be very clear about which coordinate system we are working in.
We know UIKit has (0,0) in the top left and (1,1) in the bottom right, but this is not true of Core Image:
Due to Core Image's coordinate system mismatch with UIKit... (see here)
or Vision (including CoreML recognition):
Vision uses a normalized coordinate space from 0.0 to 1.0 with lower left origin. (see here)
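So a point coming out of Core Image or Vision needs its y value flipped before it can be used in UIKit. A minimal sketch (the function name is my own):

// Convert a normalized point with a lower-left origin (Vision / Core Image)
// to a normalized point with an upper-left origin (UIKit).
func uiKitNormalizedPoint(from visionPoint: CGPoint) -> CGPoint {
    CGPoint(x: visionPoint.x, y: 1.0 - visionPoint.y)
}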
However, displayTransform uses the UIKit orientation:
A transform matrix that converts from normalized image coordinates in the captured image to normalized image coordinates that account for the specified parameters. Normalized image coordinates range from (0,0) in the upper left corner of the image to (1,1) in the lower right corner. (See here)
So, if you load a CVPixelBuffer into a CIImage and then try to apply the displayTransform matrix, it's going to be flipped (as you can see). But it also messes up the image.
What display transform does
displayTransform appears to be aimed mainly at Metal, or other lower-level drawing routines that tend to match the Core Image orientation.
The transformation scales the image and shifts it so it "aspect fills" the specified bounds.
If you are going to display the image in a UIImageView, the result will be reversed because their orientations differ. Furthermore, the image view does the aspect-fill transformation for you, so there is no reason to shift or scale, and thus no reason to use displayTransform at all. Just rotate the image to the proper orientation.
// apply an orientation. You can easily make a function
// which converts the screen orientation to this parameter
let rotated = CIImage(cvPixelBuffer: frame.capturedImage).oriented(...)
imageView.image = UIImage(ciImage: rotated)
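For reference, one possible version of that conversion function (an assumption on my part, based on the captured image being delivered in the sensor's native landscape-right orientation, which is the usual case for ARKit):

func cgImageOrientation(for interfaceOrientation: UIInterfaceOrientation) -> CGImagePropertyOrientation {
    switch interfaceOrientation {
    case .portrait:           return .right
    case .portraitUpsideDown: return .left
    case .landscapeLeft:      return .down
    case .landscapeRight:     return .up
    default:                  return .right
    }
}

// let rotated = CIImage(cvPixelBuffer: frame.capturedImage)
//     .oriented(cgImageOrientation(for: orientation))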
If you want to overlay content on the image, such as by adding subviews to the UIImageView, then displayTransform can be helpful. It will translate image coordinates (in the UIKit orientation) into coordinates in the image view which line up with the displayed image (which is shifted and scaled due to aspect fill).
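A sketch of that use, assuming frame, orientation, and imageView already exist and normalizedPoint is a normalized (0...1, upper-left origin) image coordinate:

// Map a normalized image point into the image view's coordinate space.
let transform = frame.displayTransform(for: orientation, viewportSize: imageView.bounds.size)
let normalizedInView = normalizedPoint.applying(transform)
let pointInView = CGPoint(x: normalizedInView.x * imageView.bounds.width,
                          y: normalizedInView.y * imageView.bounds.height)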
I have a sprite image of size 90 x 120 and I want to load that image in Unity.
When I load it, it looks stretched in height and width. I am setting the scale value of the object, but it still doesn't look right, since I am just using trial and error.
Can anyone tell me how to calculate this so that I can load the sprite properly in the scene?
My main camera has size 100 and its type is orthographic.
When I create the sprite object, these are the settings given to it:
Texture Type = Sprite
Sprite Mode = Single
Pixels Per Unit = 100
Pivot = Center
Max Size = 1024
Format = 16 bits
Can anyone tell me whether any setting needs to change, or how to calculate the scale of the sprite from the information above, so that the sprite looks right?
I'm working on an app in which I have to detect the left eye, right eye, and mouth positions.
I have an imageView on my self.view, and the imageView contains a face image; now I want to get the coordinates of both eyes and the mouth. I have seen 2-3 sample codes for this, but they are all roughly the same: in every one of them I have to invert my view to match the coordinates, which I don't want to do because my view has some other controls. And one more thing, they all use
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image.png"]];
but my imageView already has a frame and I can't init it with an image. When I do so, I find the faceFeature's eye and mouth coordinates are wrong.
I started my code from this sample code, but in that one too the view has its Y coordinate inverted.
Can anyone help me detect the face, eye, and mouth positions on the UIImageView's image without inverting my self.view?
Please let me know if my question is not clear enough.
The trick here is to transform the points and bounds returned by CIDetector into your coordinates, instead of flipping your own view. CIImage has its origin at the bottom left, which you will need to transform to the top left:
int height = CVPixelBufferGetHeight(pixelBuffer);
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -1 * height);
/* Do your face detection */
CGRect faceRect = CGRectApplyAffineTransform(feature.bounds, transform);
CGPoint mouthPoint = CGPointApplyAffineTransform(feature.mouthPosition, transform);
// Same for eyes, etc
For your second question about the UIImageView, you just have to do
imageview.image = yourImage;
after you have initialized your imageView.
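Since the question works with a UIImage in an image view rather than a pixel buffer, the same flip can be built from the image's own height, and the flipped coordinates can then be scaled from image pixels down to the image view's size. A Swift sketch (yourImage and yourImageView are placeholder names):

// Flip CIDetector's bottom-left-origin coordinates to UIKit's top-left origin,
// then scale from image pixels to image-view points.
let flip = CGAffineTransform(scaleX: 1, y: -1)
    .translatedBy(x: 0, y: -yourImage.size.height)
let scale = yourImageView.bounds.width / yourImage.size.width
let faceRect = feature.bounds
    .applying(flip)
    .applying(CGAffineTransform(scaleX: scale, y: scale))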
Worked it out! I edited the class to have a faceContainer which contains all of the face objects (the mouth and eyes), and then this container is rotated, and that's all. Obviously this is very crude, but it does work. Here is a link: http://www.jonathanlking.com/download/AppDelegate.m. Then replace the app delegate from the sample code with it.
-- OLD POST --
Have a look at this Apple documentation, and slide 42 onwards from this Apple talk. You should probably also watch the talk, as it has a demo of what you are trying to achieve; it's called "Using Core Image on iOS & Mac OS X" and is here.
I have a view that is 320x460 (standard iPhone screen) that I want to draw as if it were in landscape mode, even though the phone is not oriented that way. This seems like a simple task, which I tried to solve by creating a subview of size 460x320 and then rotating it 90 degrees by setting the view's transform. This worked, except that the rotated view was not centered correctly. To 'fix' this I added a translation transformation, which ended up looking like this:
CGAffineTransform rotate = CGAffineTransformMakeRotation(M_PI / 2.0);
[landscapeView setTransform: CGAffineTransformTranslate(rotate, 70.0, 70.0)];
I don't mind having some adjustment transformation, but I have no clue where the magic number 70 came from. I just played around with it until the edges matched up correctly. How can I get rid of it, either by eliminating the translation transformation, or by deriving the number from some meaningful math related to the height and width?
Just a hunch, but I'm guessing that prior to the transform you're setting (or defaulting) the center of landscapeView to (160, 230). When it rotates, it keeps the upper-left position fixed.
(320 px screen width - 460 px view width) = -140 px. Divide that in half, since it's centered, and you get -70 px. Same idea with the vertical.
70 is the difference between the width and height, divided by two. (460 - 320) / 2. The division by two is what centers the view.
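To get rid of the magic number, the offset can be derived from the subview's own bounds; a Swift sketch of the same transform (assuming landscapeView keeps its 460x320 bounds):

// (460 - 320) / 2 = 70: half the difference between the subview's width and height.
let offset = (landscapeView.bounds.width - landscapeView.bounds.height) / 2.0
let rotate = CGAffineTransform(rotationAngle: .pi / 2)
landscapeView.transform = rotate.translatedBy(x: offset, y: offset)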