How can I get all points in a CGPath curve or quad curve - iPhone

I have made a quad curve path using CGPathAddQuadCurveToPoint, and the path draws perfectly. But I want to know all the coordinate points that make up the path.
Is there a way to retrieve all the coordinate points in a path?
If not, do you have any other solution for retrieving all the points in a curve mathematically?
Thanks in advance,
Vamshi

You can do this using the wykobi C++ library routine for cubic Bézier curves; Wykobi supports quadratic Bézier curves as well.
Of course, as someone pointed out, you don't want all the points (not that it's impossible, it would just take infinite time :). Wykobi makes it easy to get a certain number of points: if your start, c1, c2, and end points (where c1 and c2 are the control points) are exactly the same as the ones given to CGContextAddCurveToPoint, then the generated points will lie perfectly on the line drawn by Core Graphics, so you can do things like draw a pattern at several points on the path.
See: http://www.codeproject.com/Articles/22568/Computational-Geometry-C-and-Wykobi
Also, after I started using wykobi I heard that there is a similar, maybe even better, library that is part of Boost, but I have not checked it out yet.
I created a C++ class WPoint as a bridge between wykobi points and CGPoints (C++ fun!). Here's some code (without WPoint, but you can imagine that it has exactly the same memory layout as a CGPoint, so with the right cast you can convert easily).
NSMutableArray* result = [[NSMutableArray alloc] init];

wykobi::cubic_bezier<CGFloat,2> bezier;
bezier[0] = (WPoint)p1;  // start point; in CG this was the move-to point
bezier[1] = (WPoint)b1i; // control point 1
bezier[2] = (WPoint)b2i; // control point 2
bezier[3] = (WPoint)p2;  // end point

std::vector<WPoint> point_list;
// Scale the sample count with the curve's length
// (pointDensity = desired samples per unit of distance).
int numPoints = p1.dist(p2) * pointDensity;

// *** here's the magic ***
wykobi::generate_bezier(bezier, std::back_inserter(point_list), numPoints);

for (int i = 0; i < numPoints; i++) {
    CGPoint p = (CGPoint)(point_list[i]);
    [result addObject:[NSValue valueWithCGPoint:p]];
}
// result now holds your points!
Here's a link to the Boost geometry library:
http://www.boost.org/doc/libs/1_47_0/libs/geometry/doc/html/geometry/introduction.html

Use CGContextSetLineDash
The purpose of this function is to create a dashed line, but you can use it to get smaller segments: the starting point of each segment can be treated as a point on the path.
CGSize bbSize = CGPathGetBoundingBox(path).size;
UIGraphicsBeginImageContext(bbSize);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx, 1.0);
CGContextAddPath(ctx, path);

// Example dash pattern: 1pt on, 1pt off. Shorter dashes yield more
// (and more closely spaced) segment start points.
CGFloat lengths[2] = { 1.0, 1.0 };
CGContextSetLineDash(ctx, 0.0, lengths, 2);

CGContextReplacePathWithStrokedPath(ctx);
CGPathRef result = CGContextCopyPath(ctx);
UIGraphicsEndImageContext();

If you want to work on the moveto, lineto, and curveto elements of the path, use CGPathApply. You pass it a pointer to a function in your program, and it calls that function once per element of the path.
Unfortunately, there's no way to simply ask for each element, as there is with AppKit's NSBezierPath; the applier function is the only way.
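For illustration, here is a minimal applier sketch (assuming an Apple platform where CoreGraphics is available; the printing is just a stand-in for whatever per-element work you need):

#include <CoreGraphics/CoreGraphics.h>
#include <cstdio>

// Called once per path element; 'info' is whatever pointer was passed
// to CGPathApply (unused here).
static void logElement(void* info, const CGPathElement* element)
{
    const CGPoint* p = element->points;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
            std::printf("moveto (%g, %g)\n", p[0].x, p[0].y);
            break;
        case kCGPathElementAddLineToPoint:
            std::printf("lineto (%g, %g)\n", p[0].x, p[0].y);
            break;
        case kCGPathElementAddQuadCurveToPoint:
            // p[0] is the control point, p[1] the end point.
            std::printf("quadto ctrl(%g, %g) end(%g, %g)\n",
                        p[0].x, p[0].y, p[1].x, p[1].y);
            break;
        case kCGPathElementAddCurveToPoint:
            // p[0] and p[1] are control points, p[2] the end point.
            std::printf("curveto c1(%g, %g) c2(%g, %g) end(%g, %g)\n",
                        p[0].x, p[0].y, p[1].x, p[1].y, p[2].x, p[2].y);
            break;
        case kCGPathElementCloseSubpath:
            std::printf("closepath\n");
            break;
    }
}

// Usage: CGPathApply(path, NULL, logElement);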
If you want to determine every pixel intersected by the path, too bad—that's not practical, and I can't even think of why you'd want that information. Some contexts, such as PDF contexts, don't even have pixels; in those cases, any question involving pixels is a non sequitur.

A quadratic curve is just that -- a curve. It's impossible to get a list of all of the points on it because there are infinitely many points, and it's not a simple line segment.
See Getting Information about Quartz Paths for a list of the functions you can use to query a CGPath object. Unfortunately, it seems like the most useful information you're going to get is with CGPathContainsPoint(), which only tells you if a given point is contained within the area of the path.
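What you can do is evaluate the curve's parametric form at as many parameter values as you like. A minimal sketch (the point type is my own):

// Evaluate a quadratic Bezier at t in [0, 1]:
//   B(t) = (1-t)^2 * p0 + 2(1-t)t * c + t^2 * p1
// where p0/p1 are the endpoints and c is the control point.
struct Pt { double x, y; };

static Pt quadBezier(Pt p0, Pt c, Pt p1, double t)
{
    double u = 1.0 - t;
    return { u * u * p0.x + 2.0 * u * t * c.x + t * t * p1.x,
             u * u * p0.y + 2.0 * u * t * c.y + t * t * p1.y };
}

Sampling t at N evenly spaced values gives N points that lie exactly on the curve, which is usually what "all the points" turns out to mean in practice.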

If not, do you have any other solution for retrieving all the points in a curve mathematically?
What do you need them for, i.e. what problem are you trying to solve? If it is to intersect two curves, you can do this mathematically: just set the two curve equations equal to each other and solve for the unknowns.
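Concretely, for two quadratic Bézier curves the setup looks like this (the P, C, and Q names are placeholders of mine):

B_1(s) = (1-s)^2 P_0 + 2(1-s)s C_1 + s^2 P_1
B_2(t) = (1-t)^2 Q_0 + 2(1-t)t C_2 + t^2 Q_1

Setting B_1(s) = B_2(t) gives two polynomial equations, one per coordinate, in the two unknowns s and t; the solutions with 0 <= s, t <= 1 are the intersection points.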

I guess you're after something equivalent to the Java2D FlatteningPathIterator class. For example, Java2D's path.getPathIterator(null, 1.0) returns an iterator of only 'lineTo' segments even if the original path had curveTo and quadTo; the double argument controls the 'flatness', giving you an easy way to calculate any point on the curve.
I'm searching for the same thing in Cocoa, but have found nothing. If you find a solution please let me know.
There are curve implementations around (e.g. http://sourceforge.net/projects/curves/) that could be ported, but there's always a risk that if you don't use the same algorithm as Cocoa, there could be errors between your interpolation and the stroked NSBezierPath/CGPath.

Related

NSAffineTransform vs AffineTransform (Swift)

I've been having a little success with NSAffineTransform, but I have come across AffineTransform, which is presumably more Swifty. However, it doesn't have a concat method, so how do you use it? I'm aiming to draw the same little BezierPath rotated several times round the centre.
Sorry if it's obvious; I guess others might find this useful.
Here's how I see it. If you're drawing with an NSBezierPath, you apply the transform to the path with myPath.transform(using: tr), e.g.:
let tr1 = AffineTransform(translationByX: 20, byY: 20)
let bez = NSBezierPath()
// add some elements to the path here
bez.transform(using: tr1)
bez.stroke()
As I see it, the transform affects all the elements already in the path, but not any elements subsequently added. Transforms are cumulative, in that re-applying a transform, or applying another one, will affect all the elements entered so far.
You can also use the AffineTransform's own methods to transform individual points or sizes.

Over-segmentation of Watershed algorithm

I followed the 2-D Watershed example on Mathworks.com to separate connected objects, like the image below:
The code is summarized as:
bw = imread('some_binary_image.tif');
D = -bwdist(~bw);   % negated distance transform of the foreground
D(~bw) = -Inf;      % pin the background so it forms its own basin
L = watershed(D);
The result is:
The particle in the center has been separated into two. Are there any ways to avoid the over-segmentation here?
Thanks, lennon310. chessboard does work well for most of my images, but there are still some cases where it doesn't, for example the following binary image:
Using chessboard results in:
As I have hundreds of images, it seems difficult to find one combination of parameters that works for all of them. I am wondering if I need to combine the good results obtained from using chessboard, cityblock, etc.
Use max(abs(x1-x2), abs(y1-y2)) as the distance metric (chessboard), and use an eight-connected neighborhood in the watershed function:
bw = im2bw(I);
D = -bwdist(~bw, 'chessboard');  % chessboard distance metric
imagesc(D)
D(~bw) = -Inf;
L = watershed(D, 8);             % 8-connected neighborhood
figure, imagesc(L)
Result:
I have been dealing with the same issue for a while. For me, the solution was to use a marker-based watershed method instead. Look for examples of the watershed method on the MATLAB blog by Steve: http://blogs.mathworks.com/steve/
This method of his worked best for me: http://blogs.mathworks.com/steve/2013/11/19/watershed-transform-question-from-tech-support/
Now, in an ideal world we would be able to segment everything properly using a single method, but watershed will over- or under-segment some particles no matter which method you use (unless you supply the markers manually). So currently I am using a semi-automatic approach: use watershed to segment the image as well as possible, then take the result into MSPaint and manually correct whatever under- or over-segmentation remains.
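The thread is MATLAB, but to make the marker idea concrete, here is a rough OpenCV (3.x) sketch in C++ of the same pipeline; the filename, the 0.6 peak-threshold factor, and the lack of an explicit background marker are illustrative assumptions, not values from Steve's posts:

#include <opencv2/opencv.hpp>

int main() {
    // Binary input: white particles on black background (assumed filename).
    cv::Mat bw = cv::imread("some_binary_image.tif", cv::IMREAD_GRAYSCALE);
    cv::threshold(bw, bw, 127, 255, cv::THRESH_BINARY);

    // Distance transform; DIST_C is the chessboard metric suggested above.
    cv::Mat dist;
    cv::distanceTransform(bw, dist, cv::DIST_C, 3);

    // Keep only strong peaks of the distance map as "sure foreground" markers.
    double maxVal;
    cv::minMaxLoc(dist, nullptr, &maxVal);
    cv::Mat sureFg;
    cv::threshold(dist, sureFg, 0.6 * maxVal, 255, cv::THRESH_BINARY);
    sureFg.convertTo(sureFg, CV_8U);

    // Label each marker blob, then let watershed grow the labels outward.
    cv::Mat markers;
    cv::connectedComponents(sureFg, markers);      // CV_32S labels, 0 = unmarked
    cv::Mat color;
    cv::cvtColor(bw, color, cv::COLOR_GRAY2BGR);   // watershed wants 3 channels
    cv::watershed(color, markers);                 // ridge pixels become -1
    return 0;
}

Fewer, better-placed markers are what keep watershed from over-segmenting; tuning how the peaks are extracted replaces tuning the distance metric.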
Region growing seems to have been used by some people in the past, but my image processing knowledge is limited, so I can't help you out with that. It would be great if anyone could post something about how to use region growing to segment such an image.
Hope this helps.

detect objects of any shape from image and color individual object

I am new to OpenCV. I am trying to detect different objects in an image and apply effects to each object individually. I find edges and use the following code to get contours, but I don't know how to proceed from there. Any help?
Thanks in advance
cv::Mat edges;
cv::Canny(gray, edges, 50, 150);

std::vector< std::vector<cv::Point> > c;
std::vector<cv::Point> points;
cv::findContours(edges, c, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

// Collect every contour point, and fill each contour into the mask --
// without this the mask stays empty and the masked copy below copies nothing.
cv::Mat mask = cv::Mat::zeros(edges.rows, edges.cols, CV_8UC1);
for (size_t i = 0; i < c.size(); i++)
{
    cv::drawContours(mask, c, (int)i, cv::Scalar(255), CV_FILLED);
    for (size_t j = 0; j < c[i].size(); j++)
    {
        points.push_back(c[i][j]);
        // printf(" %d \t", c[i][j].x);
    }
}
cv::Mat outputFrame = cv::Mat::zeros(inputFrame.size(), inputFrame.type());
inputFrame.copyTo(outputFrame, mask);
Since you have chosen to identify the objects through their contours, I suggest that you continue with the "Generalized Hough Transform" (PDF). You will have to create reference contours for the objects that you want to recognize (from every conceivable viewpoint).
Another option that might interest you is to look into segmentation algorithms in order to select certain objects in the image. Without knowing anything about the objects you are looking for and the images you are processing, it is impossible to give good recommendations; there is no general-purpose algorithm that works on every image (at least as far as I know).
To get an idea of the state of the art in object-class recognition, have a look at the PASCAL VOC Challenge. If your problem is simpler than the challenge (e.g. a small set of immutable objects standing in front of a single-colored background), you should specify that in your question, and maybe someone can give you better suggestions.
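If the goal is simply to color each detected object individually, here is a minimal sketch (the function and variable names are my own) that fills every contour found in the question's code with its own color:

#include <opencv2/opencv.hpp>

// Fill each contour with a distinct pseudo-random color so individual
// objects can be told apart and processed separately.
cv::Mat colorContours(const cv::Mat& edges,
                      const std::vector< std::vector<cv::Point> >& contours)
{
    cv::Mat out = cv::Mat::zeros(edges.size(), CV_8UC3);
    cv::RNG rng(12345); // fixed seed so the colors are reproducible
    for (size_t i = 0; i < contours.size(); i++) {
        cv::Scalar color(rng.uniform(0, 256),
                         rng.uniform(0, 256),
                         rng.uniform(0, 256));
        cv::drawContours(out, contours, (int)i, color, CV_FILLED);
    }
    return out;
}

Calling colorContours(edges, c) on the contours from the question's code produces an image where every object has its own flat color, which can then serve as a per-object mask.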

How to determine intersection of CGPaths

My question is similar to this one.
I have two CGPathRefs, and one of them is moved by finger touch. I want to find out whether the two CGPathRefs intersect. That question was asked almost two years ago, and I want to know whether anything has been found in the meantime.
This is fairly old, but I found it while looking for a similar solution. In my case I wanted to find when a circle overlapped with a path (a special case of your question).
I solved this by using CGPathCreateCopyByStrokingPath to create a stroked version of the original path, using the radius of the circle as the stroke width. If the center point of the circle lies on the stroked path, then the original path overlaps the circle.
BOOL CGPathIntersectsCircle(CGPathRef path, CGPoint center, CGFloat radius)
{
    CGPathRef fuzzyPath;
    fuzzyPath = CGPathCreateCopyByStrokingPath(path, NULL, radius,
                                               kCGLineCapRound,
                                               kCGLineJoinRound, 0.0);
    if (CGPathContainsPoint(fuzzyPath, NULL, center, NO))
    {
        CGPathRelease(fuzzyPath);
        return YES;
    }
    CGPathRelease(fuzzyPath);
    return NO;
}
Edit: fixed a minor bug where fuzzyPath was not released.
I have written a small pixel-based path collision detection API for CGPathRefs. It requires adding a few source directories to your project, and it only works with ARC, but it should at least show you how one might do something like this. It basically draws the two paths into two separate contexts and then checks pixel by pixel whether any pixel lies on both paths. Obviously this would be too slow to run every time the user drags a finger, but it could certainly be done every half second or so, and not necessarily on the main thread.
This is the easiest way I've found of doing something like this, and it may well be that there's no better way short of lots of math.
The source on Github
A quick Youtube demo.
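To illustrate the technique (this is a sketch of the same idea, not the linked API; the fixed raster size is an assumption), one can rasterize both paths into alpha-only bitmaps and scan for a pixel covered by both:

#include <CoreGraphics/CoreGraphics.h>
#include <vector>

// Rasterize a filled path into an 8-bit alpha-only buffer of w x h pixels.
static std::vector<uint8_t> rasterize(CGPathRef path, size_t w, size_t h)
{
    std::vector<uint8_t> buf(w * h, 0);
    CGContextRef ctx = CGBitmapContextCreate(buf.data(), w, h, 8, w,
                                             NULL, kCGImageAlphaOnly);
    CGContextAddPath(ctx, path);
    CGContextFillPath(ctx);
    CGContextRelease(ctx);
    return buf;
}

// True if any pixel is covered by both paths' fills.
static bool pathsIntersect(CGPathRef a, CGPathRef b, size_t w, size_t h)
{
    std::vector<uint8_t> bufA = rasterize(a, w, h);
    std::vector<uint8_t> bufB = rasterize(b, w, h);
    for (size_t i = 0; i < bufA.size(); i++)
        if (bufA[i] && bufB[i]) return true;
    return false;
}

The accuracy/speed trade-off is the raster resolution: a coarse grid is fast but can miss thin overlaps.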
Generally speaking, finding the intersection of two arbitrary CGPaths is going to be very complex.
There are ways to do approximations. Checking the intersections of the bounding boxes is a good first step. You can also subdivide the curves and repeat the process to get better approximations. Another option is to flatten the paths and see whether any line segments of the flattened paths intersect.
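If you go the flattening route, the core primitive is a segment-segment intersection test. A standard orientation-based sketch (types and names are my own; touching endpoints and collinear overlaps are ignored for brevity):

struct P { double x, y; };

// Cross product of (b - a) and (c - a); the sign gives the turn direction.
static double cross(P a, P b, P c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if segments ab and cd properly cross each other.
static bool segmentsIntersect(P a, P b, P c, P d)
{
    double d1 = cross(c, d, a), d2 = cross(c, d, b);
    double d3 = cross(a, b, c), d4 = cross(a, b, d);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

Run this over every pair of segments from the two flattened paths; a finer flattening tolerance gives a more faithful answer at the cost of more segment pairs.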
For the general case, however, things get very nasty very fast. Consider, for example, that two cubic Bézier segments (never mind an entire path, just one segment) can intersect each other at up to nine points. The more segments in your path, the more potential intersections. There is also the problem of degenerate Bézier curves, where a segment has a cusp that just touches one point of another segment. Does that count as an intersection? (Sometimes yes, sometimes no.)
It's not clear from your question, but you might also want to consider the intersections of the strokes that are applied to the curves, correctly accounting for line joins and miters. That gets even harder. Macromedia FreeHand (a drawing program similar to Adobe Illustrator) had a very large, complex, intensely mathematical library for discovering arbitrary Bézier curve intersections. The problem is not easily solved.
To cheaply test two CAShapeLayers for intersection, you can compare the bounding boxes of their paths. A CAShapeLayer's path does not expose a frame of its own, but CGPathGetBoundingBox returns the enclosing rectangle of a path, so (for two layers layer1 and layer2):
if (CGRectIntersectsRect(CGPathGetBoundingBox(layer1.path), CGPathGetBoundingBox(layer2.path)))
Keep in mind this compares rectangles only, so it can report an intersection even where the paths themselves do not touch.

Screen-to-World coordinate conversion in OpenGLES an easy task?

The Screen-to-world problem on the iPhone
I have a 3D model (a cube) rendered in an EAGLView, and I want to be able to detect when I am touching the center of a given face (from any orientation angle) of the cube. Sounds pretty easy, but it is not...
The problem:
How do I accurately relate screen coordinates (the touch point) to world coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world axis might seem the logical fix, but problems arise when I need to zoom or rotate the 3D space. Note: rotating and zooming in and out of the 3D space will change the relationship between the 2D screen coordinates and the 3D world coordinates. You also have to allow for the distance between the viewpoint and the objects in 3D space. At first this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between between screen and world, but creating such a framework would require some serious design and would likely take 'time' to do -- NOT something that can be one-manned in 4 hours... and 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
                 const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
                 const GLint viewport[4],
                 GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
You need the OpenGL projection and modelview matrices. The steps, roughly (a sketch follows below):
1. Multiply the projection and modelview matrices to get the modelview-projection matrix, then invert it; the inverse transforms clip-space coordinates into world coordinates.
2. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 for X and Y respectively.
3. Construct two points, one at (0,0,0) and one at (touch_x, touch_y, -1), and transform both by the inverse modelview-projection matrix.
4. Undo the perspective divide by dividing each transformed point by its w component.
5. You now have two points describing a ray from the center of the camera into "the far distance" (the far plane).
6. Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web.
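A minimal sketch of the near/far-plane variant of the same idea, using the GLM math library (GLM itself, and where the matrices and viewport come from, are assumptions; adapt them to your renderer):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Ray { glm::vec3 origin, dir; };

// Build a world-space pick ray from a touch point, given the current
// modelview matrix, projection matrix, and viewport (x, y, width, height).
static Ray pickRay(glm::vec2 touch, const glm::mat4& modelview,
                   const glm::mat4& projection, glm::vec4 viewport)
{
    // UIKit-style touch coordinates grow downward; GL window Y grows upward.
    float winY = viewport.w - touch.y;
    // Unproject the touch at the near (z = 0) and far (z = 1) planes.
    glm::vec3 nearPt = glm::unProject(glm::vec3(touch.x, winY, 0.0f),
                                      modelview, projection, viewport);
    glm::vec3 farPt  = glm::unProject(glm::vec3(touch.x, winY, 1.0f),
                                      modelview, projection, viewport);
    return { nearPt, glm::normalize(farPt - nearPt) };
}

Intersect this ray with each face's bounding box (or the face triangles themselves) to find what was touched.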
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read back the color at the touch point; that tells you which object was touched.
Here's source for a cursor I wrote for a little project using bullet physics:
// Convert the touch/mouse position to normalized device coordinates:
// x and y in [-1, 1], with y flipped because screen y grows downward.
float x = ((float)mpos.x / screensize.x) *  2.0f - 1.0f;
float y = ((float)mpos.y / screensize.y) * -2.0f + 1.0f;

// Unproject the far point and undo the perspective divide.
p2 = renderer->camera.unProject(vec4(x, y, 1.0f, 1));
p2 /= p2.w;

// Ray origin: the camera position, nudged slightly toward the far point.
vec4 pos = activecam.GetView().col_t;
p1 = pos + (((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);
p1.w = 1.0f;

// Cast the ray through the physics world and react to the closest hit.
btCollisionWorld::ClosestRayResultCallback rayCallback(
    btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x, p1.y, p1.z),
                            btVector3(p2.x, p2.y, p2.z), rayCallback);
if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        renderer->setHighlight(0);
    }
    else if (body)
    {
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr << "hit " << ent->getName() << endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to the original objects lets you determine what object is under the mouse cursor. (Note that select mode is part of desktop OpenGL; OpenGL ES on the iPhone does not support it, so there you would fall back to the color-buffer approach or ray casting.)
Google for opengl screen to world (for example there’s a thread where somebody wants to do exactly what you are looking for on GameDev.net). There is a gluUnProject function that does precisely this, but it’s not available on iPhone, so you have to port it (see this source from the Mesa project). Or maybe there’s already some publicly available source somewhere?