How would I draw something like this in Core Graphics - iphone

I want to be able to draw using this as my stroke. How would I do this as efficiently as possible, with on-the-fly drawing? I was thinking of CGPatternRef, but I really don't know how to use it.
Edit:
It does not need to warp to the path. I just couldn't fix that issue in Illustrator.

Does the Apple doc help?
See the section in Quartz 2D Programming Guide, How Patterns Work, or Patterns in general.
Here is how to draw a star (from the above docs), where PSIZE is the star size:
static void MyDrawStencilStar(void *info, CGContextRef myContext)
{
    int k;
    double r, theta;

    r = 0.8 * PSIZE / 2;
    theta = 2 * M_PI * (2.0 / 5.0); // 144 degrees

    CGContextTranslateCTM(myContext, PSIZE / 2, PSIZE / 2);
    CGContextMoveToPoint(myContext, 0, r);
    for (k = 1; k < 5; k++) {
        CGContextAddLineToPoint(myContext,
                                r * sin(k * theta),
                                r * cos(k * theta));
    }
    CGContextClosePath(myContext);
    CGContextFillPath(myContext);
}
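If you do want to go the CGPatternRef route, you register that callback with CGPatternCreate and install the pattern as a fill (or stroke) color, along the lines of the same guide. A rough sketch, assuming PSIZE from above and an existing context myContext; the tile spacing and the blue stencil color are just placeholders:

static const CGPatternCallbacks callbacks = { 0, &MyDrawStencilStar, NULL };

CGPatternRef pattern = CGPatternCreate(NULL,
                                       CGRectMake(0, 0, PSIZE, PSIZE),  // one pattern cell
                                       CGAffineTransformIdentity,
                                       PSIZE, PSIZE,                    // tile spacing
                                       kCGPatternTilingConstantSpacing,
                                       false,                           // uncolored (stencil) pattern
                                       &callbacks);

// Uncolored patterns need a base color space to supply the paint color.
CGColorSpaceRef baseSpace = CGColorSpaceCreateDeviceRGB();
CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(baseSpace);
CGContextSetFillColorSpace(myContext, patternSpace);

CGFloat color[4] = { 0, 0, 1, 1 };  // RGBA used to paint the stencil
CGContextSetFillPattern(myContext, pattern, color);
CGContextFillRect(myContext, CGRectMake(0, 0, 200, 200));

CGColorSpaceRelease(patternSpace);
CGColorSpaceRelease(baseSpace);
CGPatternRelease(pattern);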
Just add the curve transformation and at each point draw the star.
Here is simple C code to calculate points on a cubic Bézier curve.
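A minimal sketch of that calculation (the Point2D type and function name are mine, not from any particular library):

typedef struct { float x, y; } Point2D;

// Evaluate a cubic Bézier at parameter t in [0,1] using the Bernstein form:
// B(t) = (1-t)^3 P0 + 3(1-t)^2 t C1 + 3(1-t) t^2 C2 + t^3 P1
static Point2D cubicBezierPoint(Point2D p0, Point2D c1, Point2D c2, Point2D p1, float t)
{
    float u = 1.0f - t;
    Point2D p;
    p.x = u*u*u*p0.x + 3*u*u*t*c1.x + 3*u*t*t*c2.x + t*t*t*p1.x;
    p.y = u*u*u*p0.y + 3*u*u*t*c1.y + 3*u*t*t*c2.y + t*t*t*p1.y;
    return p;
}

Stepping t from 0 to 1 in equal increments gives points along the curve; note that equal t steps are not equal arc-length steps, so stars will bunch up where the curve is tight unless you re-parameterize by distance.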

You could try importing your Illustrator document into this application: Opacity, and then doing an "Export as source code".
See here: http://likethought.com/opacity/workflow/ for more information.

You will need to walk the path and compute coordinates of the curve at equal distances (along the path). This is inexpensive. Rotation gets a little hairy (rotating about the tangent), but overall, this is a basic bézier curve problem. Simply solve the curve equation and render a star at each vertex.
There's nothing built-in that will do it all. You can query points for intersection, but only rendering solves the curve. This is encapsulated by CoreGraphics, so you can't pull it out or take advantage of what they already have.
This is something you should consider writing yourself. It's not difficult, honest. Only a basic understanding of calculus is required, if that. If you write it yourself, you can even add in the warping effects if you like.

This looks like a job for OpenGL. CoreGraphics doesn't offer any simple way that I know of to warp the stars according to their proximity to a path.

What is the best way to smoothen a noisy image filter?

I am currently trying to recreate the watercolor effect of Instagram in Unity.
Instagram: https://i.imgur.com/aMyEhjS.jpg
My approach: https://i.imgur.com/9zIOQ7k.jpg
My approach is rather noisy. This is the main code which creates the effect:
float3 stepColor(float3 col) {
    const float3 lumvals = float3(0.5, 0.7, 1.0);
    float3 hsv = rgb2hsv(col);

    if (hsv.z <= 0.33) {
        hsv.z = lumvals.x;
    }
    else if (hsv.z <= 0.55) {
        hsv.z = lumvals.y;
    }
    else {
        hsv.z = lumvals.z;
    }

    return hsv2rgb(hsv);
}
Which algorithm would be suitable here to denoise and smooth the end result the way Instagram achieves it?
Watercolor filters use something called mean shift analysis to average out the image while preserving features. It is an iterative approach where you make clusters of pixels gravitate towards their mean value.
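As a rough illustration only (not necessarily how Instagram does it), a single, heavily simplified iteration on a grayscale float image might look like this in C; the function and parameter names are made up:

#include <math.h>

// For each pixel, average the neighbours within spatialRadius whose intensity
// lies within rangeRadius of the centre pixel, and move the pixel toward that
// mean. Repeating this a few times lets flat regions converge to their mean
// while edges (where few neighbours pass the range test) stay put.
void meanShiftIteration(const float *img, float *out,
                        int width, int height,
                        int spatialRadius, float rangeRadius)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            float centre = img[y * width + x];
            float sum = 0.0f;
            int count = 0;
            for (int dy = -spatialRadius; dy <= spatialRadius; dy++) {
                for (int dx = -spatialRadius; dx <= spatialRadius; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                        continue;
                    float v = img[ny * width + nx];
                    if (fabsf(v - centre) <= rangeRadius) {
                        sum += v;
                        count++;
                    }
                }
            }
            out[y * width + x] = (count > 0) ? sum / count : centre;
        }
    }
}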
Here is a Java code example:
https://imagej.nih.gov/ij/plugins/mean-shift.html
Here is a paper which describes the watercolor effect and its components (including edge darkening):
http://maverick.inria.fr/Publications/2006/BKTS06/watercolor.pdf
There is a GitHub project with CUDA and OpenCL implementations, but if you want to actually understand the algorithm, I'd refer you to this page, which explains it quite neatly using Python code:
http://www.chioka.in/meanshift-algorithm-for-the-rest-of-us-python/
Another option off the top of my head is to use a Sobel/Roberts cross filter to detect all the edges in the image, and then use the inverse of this value as a mask for a Gaussian blur. It won't give you the same nice layering effect, though.

iPhone Image Processing with Accelerate Framework and vDSP

UPDATE: Please see additional question below with more code;
I am trying to code a category for blurring an image. My starting point is Jeff LaMarche's sample here. Whilst this (after the fixes suggested by others) works fine, it is an order of magnitude too slow for my requirements - on a 3GS it takes maybe 3 seconds to do a decent blur and I'd like to get this down to under 0.5 sec for a full screen (faster is better).
He mentions the Accelerate framework as a performance enhancement, so I've spent the last day looking at this, and in particular at vDSP_f3x3, which according to the Apple documentation:
Filters an image by performing a two-dimensional convolution with a 3x3 kernel; single precision.
Perfect - I have a suitable filter matrix, and I have an image ... but this is where I get stumped.
vDSP_f3x3 assumes the image data is (float *), but my image comes from:
srcData = (unsigned char *)CGBitmapContextGetData (context);
and the context comes from CGBitmapContextCreate with kCGImageAlphaPremultipliedFirst, so my srcData is really ARGB with 8 bits per component.
I suspect what I really need is a context with float components, but according to the Quartz documentation here, kCGBitmapFloatComponents is only available on Mac OS and not on iOS :-(
Is there a really fast way using the accelerate framework of converting the integer components I have into the float components that vDSP_f3x3 needs? I mean I could do it myself, but by the time I do that, then the convolution, and then convert back, I suspect I'll have made it even slower than it is now since I might as well convolve as I go.
Maybe I have the wrong approach?
Does anyone who has done some image processing on the iPhone using vDSP have any tips for me? The documentation I can find is very reference-oriented and not very newbie friendly when it comes to this sort of thing.
If anyone has a reference for really fast blurring (and high quality, not the reduce-the-resolution-and-then-rescale stuff I've seen that looks pants), that would be fab!
EDIT:
Thanks @Jason. I've done this and it is almost working, but now my problem is that although the image does blur, on every invocation it shifts left by 1 pixel. It also seems to make the image black and white, but that could be something else.
Is there anything in this code that leaps out as obviously incorrect? I haven't optimised it yet and it's a bit rough, but hopefully the convolution code is clear enough.
CGImageRef CreateCGImageByBlurringImage(CGImageRef inImage, NSUInteger pixelRadius, NSUInteger gaussFactor)
{
    unsigned char *srcData, *finalData;

    CGContextRef context = CreateARGBBitmapContext(inImage);
    if (context == NULL)
        return NULL;

    size_t width = CGBitmapContextGetWidth(context);
    size_t height = CGBitmapContextGetHeight(context);
    size_t bpr = CGBitmapContextGetBytesPerRow(context);

    int componentsPerPixel = 4; // ARGB

    CGRect rect = {{0,0},{width,height}};
    CGContextDrawImage(context, rect, inImage);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    srcData = (unsigned char *)CGBitmapContextGetData(context);

    if (srcData != NULL)
    {
        size_t dataSize = bpr * height;
        finalData = malloc(dataSize);
        memcpy(finalData, srcData, dataSize);

        // Generate Gaussian kernel
        float *kernel;

        // Limit the pixelRadius
        pixelRadius = MIN(MAX(1, pixelRadius), 248);
        int kernelSize = pixelRadius * 2 + 1;

        kernel = malloc(kernelSize * sizeof *kernel);

        int gauss_sum = 0;
        for (int i = 0; i < pixelRadius; i++)
        {
            kernel[i] = 1 + (gaussFactor * i);
            kernel[kernelSize - (i + 1)] = 1 + (gaussFactor * i);
            gauss_sum += (kernel[i] + kernel[kernelSize - (i + 1)]);
        }
        kernel[(kernelSize - 1) / 2] = 1 + (gaussFactor * pixelRadius);
        gauss_sum += kernel[(kernelSize - 1) / 2];

        // Scale the kernel
        for (int i = 0; i < kernelSize; ++i) {
            kernel[i] = kernel[i] / gauss_sum;
        }

        float *srcAsFloat, *resultAsFloat;

        srcAsFloat = malloc(width * height * sizeof(float) * componentsPerPixel);
        resultAsFloat = malloc(width * height * sizeof(float) * componentsPerPixel);

        // Convert uint source ARGB to floats
        vDSP_vfltu8(srcData, 1, srcAsFloat, 1, width * height * componentsPerPixel);

        // Convolve (hence the -1) with the kernel
        vDSP_conv(srcAsFloat, 1, &kernel[kernelSize - 1], -1, resultAsFloat, 1,
                  width * height * componentsPerPixel, kernelSize);

        // Copy the floats back to ints
        vDSP_vfixu8(resultAsFloat, 1, finalData, 1, width * height * componentsPerPixel);

        free(resultAsFloat);
        free(srcAsFloat);
    }

    size_t bitmapByteCount = bpr * height;

    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, finalData, bitmapByteCount, &providerRelease);

    CGImageRef cgImage = CGImageCreate(width, height, CGBitmapContextGetBitsPerComponent(context),
                                       CGBitmapContextGetBitsPerPixel(context), CGBitmapContextGetBytesPerRow(context),
                                       CGBitmapContextGetColorSpace(context), CGBitmapContextGetBitmapInfo(context),
                                       dataProvider, NULL, true, kCGRenderingIntentDefault);

    CGDataProviderRelease(dataProvider);
    CGContextRelease(context);

    return cgImage;
}
I should add that if I comment out the vDSP_conv line and change the following line to:
vDSP_vfixu8(srcAsFloat, 1, finalData, 1, width*height*componentsPerPixel);
Then as expected, my result is a clone of the original source. In colour and not shifted left. This implies to me that it IS the convolution that is going wrong, but I can't see where :-(
THOUGHT: Actually, thinking about this, it seems to me that the convolution needs to know the input pixels are in ARGB format, as otherwise it will be multiplying the values together with no knowledge of their meaning (i.e. it will multiply R * B, etc.). This would explain why I get a B&W result, I think, but not the shift. Again, I think there needs to be more to it than my naive version here ...
FINAL THOUGHT: I think the shifting left is a natural result of the filter and I need to look at the image dimensions and possibly pad it out ... so I think the code is actually working OK given what I've fed it.
While the Accelerate framework will be faster than simple serial code, you'll probably never see the greatest performance by blurring an image using it.
My suggestion would be to use an OpenGL ES 2.0 shader (for devices that support this API) to do a two-pass box blur. Based on my benchmarks, the GPU can handle these kinds of image manipulation operations at 14-28X the speed of the CPU on an iPhone 4, versus the maybe 4.5X that Apple reports for the Accelerate framework in the best cases.
Some code for this is described in this question, as well as in the "Post-Processing Effects on Mobile Devices" chapter in the GPU Pro 2 book (for which the sample code can be found here). By placing your image in a texture, then reading values in between pixels, bilinear filtering on the GPU gives you some blurring for free, which can then be combined with a few fast lookups and averaging operations.
If you need a starting project to feed images into the GPU for processing, you might be able to use my sample application from the article here. That sample application passes AVFoundation video frames as textures into a processing shader, but you can modify it to send in your particular image data and run your blur operation. You should be able to use my glReadPixels() code to then retrieve the blurred image for later use.
Since I originally wrote this answer, I've created an open source image and video processing framework for doing these kinds of operations on the GPU. The framework has several different blur types within it, all of which can be applied very quickly to images or live video. The GPUImageGaussianBlurFilter, which applies a standard 9-hit Gaussian blur, runs in 16 ms for a 640x480 frame of video on the iPhone 4. The GPUImageFastBlurFilter is a modified 9-hit Gaussian blur that uses hardware filtering, and it runs in 2.0 ms for that same video frame. Likewise, there's a GPUImageBoxBlurFilter that uses a 5-pixel box and runs in 1.9 ms for the same image on the same hardware. I also have median and bilateral blur filters, although they need a little performance tuning.
In my benchmarks, Accelerate doesn't come close to these kinds of speeds, especially when it comes to filtering live video.
You definitely want to convert to float to perform the filtering since that is what the accelerated functions take, plus it is a lot more flexible if you want to do any additional processing. The computation time of a 2-D convolution (filter) will most likely dwarf any time spent in conversion. Take a look at the function vDSP_vfltu8() which will quickly convert the uint8 data to float. vDSP_vfixu8() will convert it back to uint8.
To perform a blur, you are probably going to want a bigger convolution kernel than 3x3 so I would suggest using the function vDSP_imgfir() which will take any kernel size.
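As a sketch of that pipeline on a single de-interleaved channel (the helper name and parameters are mine, not from the vDSP documentation; both kernel dimensions must be odd):

#include <Accelerate/Accelerate.h>
#include <stdlib.h>

// Blur one colour plane of width*height bytes with a kRows x kCols float
// kernel whose weights sum to 1.
void blurPlane(const uint8_t *src, uint8_t *dst,
               vDSP_Length height, vDSP_Length width,
               const float *kernel, vDSP_Length kRows, vDSP_Length kCols)
{
    vDSP_Length n = width * height;
    float *srcF = malloc(n * sizeof *srcF);
    float *dstF = malloc(n * sizeof *dstF);

    vDSP_vfltu8(src, 1, srcF, 1, n);                              // uint8 -> float
    vDSP_imgfir(srcF, height, width, kernel, dstF, kRows, kCols); // 2-D convolution
    vDSP_vfixu8(dstF, 1, dst, 1, n);                              // float -> uint8

    free(srcF);
    free(dstF);
}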
Response to edit:
A few things:
You need to perform the filtering on each color channel independently. That is, you need to split the R, G, and B components into their own images (of type float), filter them, then remultiplex them into the ARGB image.
vDSP_conv computes a 1-D convolution, but to blur an image, you really need a 2-D convolution. vDSP_imgfir essentially computes the 2-D convolution. For this you will need a 2-D kernel as well. You can look up the formula for a 2-D Gaussian function to produce the kernel.
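For illustration, producing such a kernel might look like this (the function name is mine; the weights are normalized so the blur preserves overall brightness):

#include <math.h>
#include <stdlib.h>

// Build a (2*radius+1) x (2*radius+1) Gaussian kernel with standard deviation
// sigma, suitable for vDSP_imgfir. The caller frees the returned buffer.
float *makeGaussianKernel(int radius, float sigma)
{
    int size = 2 * radius + 1;
    float *kernel = malloc(size * size * sizeof *kernel);
    float sum = 0.0f;

    for (int y = -radius; y <= radius; y++) {
        for (int x = -radius; x <= radius; x++) {
            float v = expf(-(x * x + y * y) / (2.0f * sigma * sigma));
            kernel[(y + radius) * size + (x + radius)] = v;
            sum += v;
        }
    }
    for (int i = 0; i < size * size; i++)
        kernel[i] /= sum;

    return kernel;
}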
Note: You actually can perform a 2-D convolution using 1-D convolutions if your kernel is separable (which a Gaussian is). I won't go into what that means, but you essentially have to perform a 1-D convolution across the columns and then a 1-D convolution across the resulting rows. I would not go this route unless you know what you are doing.
So answering my own question with Jason's excellent help, the final working code fragment is provided here for reference in case it helps anyone else. As you can see, the strategy is to split the source ARGB (I'm ignoring A for performance and assuming the data is XRGB) into 3 float arrays, apply the filter and then re-multiplex the result.
It works a treat, but it is achingly slow. I'm using a large 16x16 kernel to get a heavy blur, and on my 3GS it takes about 5 seconds for a full-screen image, so that's not going to be a viable solution.
Next step is to look at alternatives ... but thanks for getting me up and running.
vDSP_vfltu8(srcData+1,4,srcAsFloatR,1,pixels);
vDSP_vfltu8(srcData+2,4,srcAsFloatG,1,pixels);
vDSP_vfltu8(srcData+3,4,srcAsFloatB,1,pixels);
// Now apply the filter to each of the components. For a gaussian blur with a 16x16 kernel
// this turns out to be really slow!
vDSP_imgfir (srcAsFloatR, height, width, kernel,resultAsFloatR, frows, fcols);
vDSP_imgfir (srcAsFloatG, height, width, kernel,resultAsFloatG, frows, fcols);
vDSP_imgfir (srcAsFloatB, height, width, kernel,resultAsFloatB, frows, fcols);
// Now re-multiplex the final image from the processed float data
vDSP_vfixu8(resultAsFloatR, 1, finalData+1, 4, pixels);
vDSP_vfixu8(resultAsFloatG, 1, finalData+2, 4, pixels);
vDSP_vfixu8(resultAsFloatB, 1, finalData+3, 4, pixels);
For future reference if you're considering implementing this DON'T: I've done it for you!
see:
https://github.com/gdawg/uiimage-dsp
for a UIImage Category which adds Gaussian/Box Blur/Sharpen using vDSP and the Accelerate framework.
Why are you using vDSP to do image filtering? Try vImageConvolve_ARGB8888(). vImage is the image processing component of Accelerate.framework.
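A sketch of what that call might look like, reusing the srcData/finalData buffers and dimensions from the code above (the 3x3 box kernel is only an example; divisor normalizes the integer weights):

#include <Accelerate/Accelerate.h>

vImage_Buffer src  = { srcData,   height, width, bpr };
vImage_Buffer dest = { finalData, height, width, bpr };

static const int16_t kernel[9] = { 1, 1, 1,
                                   1, 1, 1,
                                   1, 1, 1 };
Pixel_8888 background = { 0, 0, 0, 0 };

vImage_Error err = vImageConvolve_ARGB8888(&src, &dest, NULL, 0, 0,
                                           kernel, 3, 3, 9 /* divisor */,
                                           background, kvImageEdgeExtend);
if (err != kvImageNoError) {
    // handle the error
}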

cocos2d/box2d iPhone - Random circular paths

I am experimenting with some new ideas in Cocos2D/Box2D on iPhone.
I want to animate a small swarm of fireflies moving on circular (random?) paths... the idea is that the user can capture a firefly with a net..
I have considered using gravity simulations for this but I believe it is over complicating things... my previous experience with using Bezier curves tells me that this isn't the solution either..
Does anyone have any bright insights for me?
Thanks so much.
Do you need the fireflies to collide with each other?
I ask because, if this isn't a requirement, Box2D is probably overkill for your needs. Cocos2d is an excellent choice for this by the sounds of it, but I think you'd be better off looking into flocking algorithms like boids.
Even that may be overly complicated. Mixing a few sine and cosine terms together with some random scaling factors will likely be enough.
You could have one sin/cosine combination forming an ellipse nearly the size of the screen:
x = halfScreenWidth + cos (t) * halfScreenWidth * randomFactor;
y = halfScreenHeight + sin (t) * halfScreenHeight * randomFactor;
where randomFactor would be something in the realm of 0.6 to 0.9
This will give you broad elliptical motion around the screen, then you could add a smaller sin/cos factor to make them swirl around the point on that ellipse.
By multiplying your time delta (t) by different values (negative and positive) the path of the curve will move in a less geometric way. For example, if you use
x = halfScreenWidth + cos (2*t) * halfScreenWidth * randomFactor;
the ellipse will turn into a figure 8 (I think!).
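Putting those pieces together, a per-firefly position function might look roughly like this (all names and constants are illustrative):

#include <math.h>

// Per-firefly parameters, picked once when the firefly is spawned.
typedef struct {
    float randomFactor;  // 0.6 .. 0.9, scales the big ellipse
    float swirlRadius;   // radius of the small local swirl, in points
    float speed;         // how fast t advances for this firefly
} Firefly;

void fireflyPosition(const Firefly *f, float t,
                     float halfScreenWidth, float halfScreenHeight,
                     float *x, float *y)
{
    // Broad elliptical motion around the screen centre...
    *x = halfScreenWidth  + cosf(f->speed * t) * halfScreenWidth  * f->randomFactor;
    *y = halfScreenHeight + sinf(f->speed * t) * halfScreenHeight * f->randomFactor;

    // ...plus a quicker, smaller swirl around that point on the ellipse.
    *x += cosf(5.0f * t) * f->swirlRadius;
    *y += sinf(5.0f * t) * f->swirlRadius;
}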
Hope this helps get you started. Good luck.
One place to look for ideas would be in the domain of artificial life. They have been simulating swarms of entities for a long time. Here is a link for some simple swarm code written in Java that should give you some ideas.
http://www.aridolan.com/ofiles/Download.aspx

How do I map a texture to the sides of an icosahedron?

I have been trying to develop a 3D game for a long time now. I went through this tutorial and found that I didn't know enough to actually make the game.
I am currently trying to add a texture to the icosahedron (in the "Look at Basic Drawing" section) he used in the tutorial, but I cannot get the texture on more than one side. The other sides are completely invisible for no logical reason (they showed up perfectly until I added the texture).
Here are my main questions:
How do I make the texture show up properly without using a million vertices and colors to mimic the results?
How can I move the object based on a variable that I can set in other functions?
Try to think of your icosahedron as a low-poly sphere. I suppose Lamarche's icosahedron has its center at (0,0,0). Look at this tutorial; it is written for DirectX, but it explains the general principle of sphere texture mapping: http://www.mvps.org/directx/articles/spheremap.htm. I used it in my project and it works great. You move the 3D object by applying various transformation matrices. You should have something like this:
glPushMatrix();
glTranslatef();
draw icosahedron;
glPopMatrix();
Here is my code snippet of how I did the texture coordinates for a hemisphere shape, based on the tutorial mentioned above:
GLfloat *ellipsoidTexCrds;
Vector3D *ellipsoidNorms;
int numVerts = *numEllipsoidVerticesHandle;

ellipsoidTexCrds = calloc(numVerts * 2, sizeof(GLfloat));
ellipsoidNorms = *ellipsoidNormalsHandle;

for (int i = 0, j = 0; i < numVerts * 2; i += 2, j++)
{
    ellipsoidTexCrds[i] = asin(ellipsoidNorms[j].x) / M_PI + 0.5;
    ellipsoidTexCrds[i+1] = asin(ellipsoidNorms[j].y) / M_PI + 0.5;
}
I wrote this about a year and a half ago, but I remember that I calculated my vertex normals as being equal to the normalized vertices. That works because, when you have a spherical shape centered at (0,0,0), the vertices essentially describe rays from the center of the sphere. Normalize them, and you've got yourself vertex normals.
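For example (this assumes a Vector3D struct matching the one used above):

#include <math.h>

typedef struct { float x, y, z; } Vector3D;

// For a sphere-like mesh centred at (0,0,0), the normalized vertex position
// points straight out of the surface, so it doubles as the vertex normal.
static Vector3D normalFromVertex(Vector3D v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    Vector3D n = { v.x / len, v.y / len, v.z / len };
    return n;
}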
And by the way, if you're planning to use a 3D engine on the iPhone, use Ogre3D; it's really fast.
hope this helps :)

How can I get all points in CGPath curve or quad curve

I have made a quad curve path using the method CGPathAddQuadCurveToPoint. I got the path perfectly, but I want to know all the coordinate points that make up the path.
Is there a way to retrieve all the coordinate points in a path?
If not, do you have any other solution for retrieving all the points in a curve mathematically?
Thanks in advance,
Vamshi
You can do this using the wykobi C++ library routine for cubic Bézier curves. Wykobi's library supports quadratic Bézier curves as well.
Of course, as someone pointed out, you don't want all the points (not that it's impossible, it would just take infinite time :). Wykobi makes it easy to get a certain number of points -- if your start, c1, c2, and end points (where c1 and c2 are the control points) are exactly the same as the ones given to CGContextAddCurveToPoint, then the points will lie perfectly on the line drawn by Core Graphics -- so you can do things like draw a pattern at several points on the path.
See: http://www.codeproject.com/Articles/22568/Computational-Geometry-C-and-Wykobi
Also, after I started using wykobi I heard that there is a similar, maybe even better library that is part of Boost, but have not checked it out yet.
I created a C++ class WPoint as a bridge between wykobi points and CGPoints (C++ fun!). Here's some code (without WPoint, but you can imagine that it has exactly the same layout as a CGPoint, so with the right cast you can convert easily):
NSMutableArray* result = [[NSMutableArray alloc] init];

wykobi::cubic_bezier<CGFloat,2> bezier;
bezier[0] = (WPoint)p1;  // start point, in CG we did a CGMoveToPoint
bezier[1] = (WPoint)b1i; // control 1
bezier[2] = (WPoint)b2i; // control 2
bezier[3] = (WPoint)p2;  // end point

std::vector<WPoint> point_list;
int numPoints = p2.dist(p3) * pointDensity;

// *** here's the magic ***
wykobi::generate_bezier(bezier, std::back_inserter(point_list), numPoints);

for (int i = 0; i < numPoints; i++) {
    CGPoint p = (CGPoint)(point_list[i]);
    [result addObject:[NSValue valueWithCGPoint:p]];
}
// result has your points!
// result has your points!
Here's a link to the Boost geometry library:
http://www.boost.org/doc/libs/1_47_0/libs/geometry/doc/html/geometry/introduction.html
Use CGContextSetLineDash.
The purpose of this function is to create a dashed line, but you can use it to get smaller segments; the starting point of each segment can be treated as a point on the curve.
CGSize bbSize = CGPathGetBoundingBox(path).size;
UIGraphicsBeginImageContext(bbSize);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx, 1.0);
CGContextAddPath(ctx, path);
CGContextSetLineDash(ctx, phase, lengths, count);
CGContextReplacePathWithStrokedPath(ctx);
result = CGContextCopyPath(ctx);
UIGraphicsEndImageContext();
If you want to work on the moveto, lineto, and curveto elements of the path, use CGPathApply. You pass this a pointer to a function in your program, and it calls that function once per element of the path.
Unfortunately, there's no way to just ask for each element like there is with AppKit's NSBezierPath. The function is the only way.
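A small sketch of that approach; the applier just prints each element, and element->points holds 0 to 3 CGPoints depending on the element type:

#include <stdio.h>
#include <CoreGraphics/CoreGraphics.h>

static void logPathElement(void *info, const CGPathElement *element)
{
    const CGPoint *p = element->points;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
            printf("moveto (%g, %g)\n", p[0].x, p[0].y);
            break;
        case kCGPathElementAddLineToPoint:
            printf("lineto (%g, %g)\n", p[0].x, p[0].y);
            break;
        case kCGPathElementAddQuadCurveToPoint:
            printf("quadto control (%g, %g) end (%g, %g)\n",
                   p[0].x, p[0].y, p[1].x, p[1].y);
            break;
        case kCGPathElementAddCurveToPoint:
            printf("curveto c1 (%g, %g) c2 (%g, %g) end (%g, %g)\n",
                   p[0].x, p[0].y, p[1].x, p[1].y, p[2].x, p[2].y);
            break;
        case kCGPathElementCloseSubpath:
            printf("closepath\n");
            break;
    }
}

// Usage: CGPathApply(path, NULL, logPathElement);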
If you want to determine every pixel intersected by the path, too bad—that's not practical, and I can't even think of why you'd want that information. Some contexts, such as PDF contexts, don't even have pixels; in those cases, any question involving pixels is a non sequitur.
A quadratic curve is just that -- a curve. It's impossible to get a list of all of the points on it because there are infinitely many points, and it's not a simple line segment.
See Getting Information about Quartz Paths for a list of the functions you can use to query a CGPath object. Unfortunately, it seems like the most useful information you're going to get is with CGPathContainsPoint(), which only tells you if a given point is contained within the area of the path.
If not, do you have any other solution for retrieving all the points in a curve mathematically?
What do you need them for, i.e. what problem are you trying to solve? If it is to intersect two curves, you can do this mathematically. Just set the two curve equations equal to each other and solve for the unknown.
I guess you're after something equivalent to Java2D's FlatteningPathIterator class. For example, Java2D's path.getPathIterator(null, 1.0) returns an iterator of only 'lineTo' segments even if the original path had curveTo and quadTo segments; the double argument controls the 'flatness', giving you an easy way to calculate any point on the curve.
I'm searching for the same thing in Cocoa, but have found nothing. If you find a solution please let me know.
There are curve implementations around (e.g. http://sourceforge.net/projects/curves/) that could be ported, but there's always a risk that if you don't use the same algorithm as Cocoa, there could be errors between your interpolation and the stroked NSBezierPath/CGPath.