I want to make a Fruit Ninja-style blade. I am using cocos2d, and MotionStreak looks really ugly for this. Is there another approach, or better settings for MotionStreak? Maybe a particle system? Are there any good free tools similar to ParticleDesigner?
I have my own implementation using OpenGL triangle strips mapped with a texture. The blade is very smooth if the distances between adjacent points are small enough. I use linear interpolation to insert more points between any two points whose distance is greater than a predefined constant. I'm considering second-order interpolation, but the implementation is more difficult and performance may suffer.
Source code is available here https://github.com/hiepnd/CCBlade
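For illustration, here is a minimal, framework-agnostic sketch of that interpolation step in Swift. The function name and the maxGap parameter are made up for the example, and the real CCBlade code differs in the details:

```swift
import CoreGraphics

// Illustrative only: insert linearly interpolated points between consecutive
// samples so that no two adjacent points are farther apart than maxGap.
// (The real CCBlade implementation differs in its details.)
func densify(_ points: [CGPoint], maxGap: CGFloat) -> [CGPoint] {
    guard var previous = points.first else { return [] }
    var result: [CGPoint] = [previous]
    for point in points.dropFirst() {
        let dx = point.x - previous.x
        let dy = point.y - previous.y
        let distance = (dx * dx + dy * dy).squareRoot()
        if distance > maxGap {
            // Number of segments needed so that each one is <= maxGap.
            let steps = Int((distance / maxGap).rounded(.up))
            for i in 1..<steps {
                let t = CGFloat(i) / CGFloat(steps)
                result.append(CGPoint(x: previous.x + t * dx, y: previous.y + t * dy))
            }
        }
        result.append(point)
        previous = point
    }
    return result
}
```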
I don't know how much effort it will take, but you can create and change the shape of a filter and just apply a white-to-gray gradient as its texture; it gives very good-looking results. I myself am working with cocos2d-x (a C++ port of cocos2d), and it has samples for dynamic filters (you create and manipulate a mesh and everything else is done automatically). It uses the CCActionGrid class, though I haven't used that class myself yet. If you can't solve your problem using that, ask me to search deeper.
http://pixlatedstudios.com/2012/02/fruit-ninja-like-blade-effect/
Worth checking out! It's based on hiepnd's CCBlade tutorial.
A recent question here made me think of SceneKit again, and I remembered a problem I never solved.
My app displays antenna designs using SK. Most antennas use metal rods and mesh reflectors, so I used SCNCylinder for the rods, SCNPlane for the reflector and SCNFloor for the ground. The whole thing took a couple of hours, and I'm an utter noob at 3D.
But some antennas use wires bent into arcs or helixes, and I punted here and made crappy segmented objects using several cylinders end-to-end. It looks ass-tastic.
Ideally I would like a single object that renders the arc or helix with a cylindrical cross section. Basically SCNTorus, but with a start and end angle. This post talks about using a UIBezierPath in SK, but it uses extrude to produce a ribbon-like shape. Is there a way to do something similar but with a cylinder cross section (like a partial SCNTorus)?
I know I can make a custom shape by creating the vertexes (and normals and such) but I'm hoping I missed a simpler solution.
An arc you can do with SCNShape. Start with the technique from my other answer to get an extruded, ribbon-like arc. You'll want to make sure that the part where your path traces back on itself is offset by a distance the same as your extrusion depth, so you end up with a shape that's square in cross section.
To make it circular in cross section, use the chamferProfile property — give it a path that's a quarter circle, and set the chamfer radius equal to half the extrusion depth, and the four quarter-circle chamfers will meet, forming a circular cross section.
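As a rough sketch of that chamfer trick (the profile path and values below are illustrative; depending on which way you want the rounding to bulge, you may need to mirror the control points toward (0, 0)):

```swift
import SceneKit
import UIKit

// Sketch of the chamfer trick described above: extrude the arc path by `depth`,
// then give it a quarter-circle chamfer profile with chamferRadius = depth / 2
// so the four chamfers meet in a roughly circular cross section.
// `arcPath` is assumed to be the ribbon-like arc path from the linked answer.
func roundedArcGeometry(arcPath: UIBezierPath, depth: CGFloat) -> SCNGeometry {
    let shape = SCNShape(path: arcPath, extrusionDepth: depth)

    // chamferProfile is given in a unit square, running from (0, 1) to (1, 0);
    // a cubic Bezier with these control points approximates a quarter circle.
    let profile = UIBezierPath()
    profile.move(to: CGPoint(x: 0, y: 1))
    profile.addCurve(to: CGPoint(x: 1, y: 0),
                     controlPoint1: CGPoint(x: 0.552, y: 1),
                     controlPoint2: CGPoint(x: 1, y: 0.552))

    shape.chamferMode = .both
    shape.chamferRadius = depth / 2
    shape.chamferProfile = profile
    return shape
}
```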
A helix is another story. SCNShape takes a planar path — one that varies in only two dimensions — and extrudes it to make a three-dimensional solid. A helix is a path that varies in three dimensions to start with. SceneKit doesn't have anything that describes a shape in such terms, so there's no super simple answer here.
The shader modifier solution @HalMueller alludes to is interesting, but problematic. It's simple to use a modifier at the geometry entry point to make a simple bend — say, offset every y coordinate by some amount, even by an amount that's a function of x. But that's a one-dimensional transform, so you can't use it to wrap a wire around on itself. (It also changes the cross section.) And on top of that, shader modifiers happen on the GPU at render time, so their effects are an illusion: the "real" geometry in SceneKit's model is still a cylinder, so features like hit testing apply to that and not to the transformed geometry.
The best solution to making something like a helix is probably custom geometry — generating your own vertex data (SCNGeometrySource). The math for finding the set of points on a helix is pretty simple if you follow that shape's definition. To wrap a cross section around it, follow the Frenet formulas to create a local coordinate frame at each point on the helix. Then make an index buffer (SCNGeometryElement) to stitch all those points into a surface with triangles or tristrips. (Okay, that's a lot of hand-waving around a deep topic, but a full tutorial is too big for an SO answer. This should be enough of a breadcrumb to get started, though...)
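Here is a hedged sketch of what that custom-geometry route could look like. Parameter names, segment counts, and the simplified frame construction (a stand-in for a proper Frenet frame) are all illustrative, not a drop-in implementation:

```swift
import SceneKit
import simd

// Very rough sketch: sample a helix, build a local frame at each sample, sweep
// a circular cross section along it, and stitch the rings into triangles.
func helixGeometry(radius: Float = 1.0,      // helix radius
                   pitch: Float = 0.5,       // height gained per turn
                   turns: Float = 3.0,
                   tubeRadius: Float = 0.05,
                   pathSegments: Int = 200,
                   ringSegments: Int = 12) -> SCNGeometry {
    var positions: [SCNVector3] = []
    var normals: [SCNVector3] = []
    var indices: [Int32] = []

    for i in 0...pathSegments {
        let t = Float(i) / Float(pathSegments) * turns * 2 * .pi
        // Helix centerline and its normalized tangent.
        let center = SIMD3<Float>(radius * cos(t), pitch * t / (2 * .pi), radius * sin(t))
        let tangent = simd_normalize(SIMD3<Float>(-radius * sin(t), pitch / (2 * .pi), radius * cos(t)))
        // "Radially outward" direction, made perpendicular to the tangent.
        var normal = SIMD3<Float>(cos(t), 0, sin(t))
        normal = simd_normalize(normal - simd_dot(normal, tangent) * tangent)
        let binormal = simd_cross(tangent, normal)

        for j in 0..<ringSegments {
            let a = Float(j) / Float(ringSegments) * 2 * .pi
            let dir = cos(a) * normal + sin(a) * binormal
            positions.append(SCNVector3(center + tubeRadius * dir))
            normals.append(SCNVector3(dir))
        }
    }

    // Stitch ring i to ring i + 1 with two triangles per quad.
    // Depending on orientation you may need to flip the winding or use a
    // double-sided material.
    for i in 0..<pathSegments {
        for j in 0..<ringSegments {
            let a = Int32(i * ringSegments + j)
            let b = Int32(i * ringSegments + (j + 1) % ringSegments)
            let c = a + Int32(ringSegments)
            let d = b + Int32(ringSegments)
            indices += [a, c, b, b, c, d]
        }
    }

    let vertexSource = SCNGeometrySource(vertices: positions)
    let normalSource = SCNGeometrySource(normals: normals)
    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    return SCNGeometry(sources: [vertexSource, normalSource], elements: [element])
}
```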
Here are some starting points that might help.
One approach would be to use more cylinders and make them shorter. That's the same idea behind the various segmentCount properties on the SCNGeometry primitives. Can we see a screenshot of the current linked cylinders version?
If you increase the heightSegmentCount, you could use the approach outlined here: scenekit, how to bend an object.
I just took a look at SCNShape. I was thinking you could use a shader modifier to warp the extruded shape into a circular cross section. But SCNShape doesn't seem to expose a segment count property, which I think you'd need to create enough extrusion segments for a good look. The chamferRadius and chamferProfile properties look interesting. I wonder if you could use those to create an extrusion that looks good.
I was wondering if someone could give me a guideline for detecting whether a person in a picture is bald or not, or even better, how much hair s/he has.
So far I have tried to detect the face and the eye positions. From that information, I roughly estimate the forehead and potential bald area by cropping the region above the eyes, extending upward by some fraction of the face height.
Then I extract HOG features and train the system with bald and not-bald images using SVM.
Now, looking at the test results, I see some pictures classified as bald when they actually show blonde hair or a high forehead, so that the hair is not visible after the cropping step. I'm using MATLAB for these operations.
So I know the method seems a bit naive, but can you suggest a way of finding the bald area or extracting the hair, if any exists? What method would be most appropriate for this kind of problem?
The question is very general, so the answer is general unless further info is provided:
Use computer vision (e.g. the MATLAB Computer Vision Toolbox) to detect the face/head.
The human head has known proportions; using these, one can estimate the area of the head where hair or baldness would appear (it seems you already have this).
Calculate the color range (a probabilistic color space model) in which the person's skin lies (most people fall within a similar skin color range).
Calculate the percentage of skin versus other colors (i.e. hair) in that area (see the sketch further below).
You have it!
To estimate a skin color model, check the following papers:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.8637&rep=rep1&type=pdf
http://infoscience.epfl.ch/record/135966
http://www.eurasip.org/Proceedings/Eusipco/Eusipco2010/Contents/papers/1569293757.pdf
If an area does not fit the skin model well, it can be taken as non-skin (i.e. hair, assuming no hats etc. are present in the samples).
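To make the last two steps concrete, here is a small sketch written in Swift rather than MATLAB: classify each pixel of the estimated head region with a crude RGB skin rule and report the skin fraction. The thresholds are illustrative only; a probabilistic skin model fitted along the lines of the papers above will behave better:

```swift
// Crude, illustrative RGB skin rule; replace with a fitted skin color model.
struct RGB {
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

func looksLikeSkin(_ p: RGB) -> Bool {
    let r = Int(p.r), g = Int(p.g), b = Int(p.b)
    return r > 95 && g > 40 && b > 20
        && r > g && r > b
        && r - min(g, b) > 15
        && abs(r - g) > 15
}

// Fraction of skin-colored pixels in the head region; values near 1 suggest baldness.
func skinFraction(of headRegion: [RGB]) -> Double {
    guard !headRegion.isEmpty else { return 0 }
    let skinCount = headRegion.filter(looksLikeSkin).count
    return Double(skinCount) / Double(headRegion.count)
}
```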
The head region is very small, so using HOG for classification doesn't make much sense.
You can use prior information, such as detected faces: baldness/hair will be found in the area above the face. Also, use denser feature descriptors.
You are probably ending up with a very sparse representation (equivalently, too little information), which is why your classifier is not able to classify correctly.
I am interested in using the CGContextEOFillPath function provided by Apple. Given the way the even-odd fill works, I am guessing there is a way to take the filled-in regions and calculate their area.
So my question is: does anyone know of a way to use CGContextEOFillPath and find the area of the filled-in sections?
If this isn't easily done, some pointers to a better way of doing it would be helpful, though I need to use the even-odd fill style.
Thanks.
What do you mean "Calculate the area"?
As in calculate the surface area of a complex shape?
It depends on your shapes.
Are they all polygons?
What about circles?
There are well-known formulas for calculating the area of a polygon (Wikipedia has them). Part of that calculation involves an abs(), because shapes drawn "counterclockwise" have the opposite sign from those drawn "clockwise". If you're looking to simulate the even-odd behavior, you can simply skip that sign correction, because for you the cancellation is desirable.
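For reference, a sketch of that polygon ("shoelace") formula; take abs() of the result for the area of a single loop, or sum the signed values over several subpaths so that opposite windings cancel:

```swift
import CoreGraphics

// Signed ("shoelace") area of a closed polygon given as an ordered vertex list.
// Positive for one winding direction, negative for the other.
func signedArea(of polygon: [CGPoint]) -> CGFloat {
    guard polygon.count >= 3 else { return 0 }
    var sum: CGFloat = 0
    for i in 0..<polygon.count {
        let p = polygon[i]
        let q = polygon[(i + 1) % polygon.count]
        sum += p.x * q.y - q.x * p.y
    }
    return sum / 2
}
```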
If you have more complicated shapes that involve curves, then you need to break the problem down into multiple parts - one part to solve for polygons - one to solve for circles - one to solve for other shapes, etc.
I want to ask about jelly physics ( http://www.youtube.com/watch?v=I74rJFB_W1k ): where can I find a good place to start making things like that? I want to make a car crash simulation using this jelly physics, but I can't find much about it. I don't want to use an existing physics engine; I want to write my own :)
Something like what you see in the video you linked to could be accomplished with a mass-spring system. However, as you vary the number of masses and springs, keeping your spring constants the same, you will get wildly varying results. In short, mass-spring systems are not good approximations of a continuum of matter.
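For context, the per-spring force in such a mass-spring system is just Hooke's law plus damping along the spring axis. A minimal sketch, with all names and constants made up for the example:

```swift
import simd

// Illustrative only: Hooke's law plus damping for one spring between two masses.
struct Spring {
    var a: Int            // index of the first mass
    var b: Int            // index of the second mass
    var restLength: Float
    var stiffness: Float  // k
    var damping: Float    // c
}

// Force acting on mass `a`; the force on mass `b` is the negative of this.
func forceOnA(_ s: Spring,
              positions: [SIMD3<Float>],
              velocities: [SIMD3<Float>]) -> SIMD3<Float> {
    let delta = positions[s.b] - positions[s.a]
    let length = simd_length(delta)
    guard length > 1e-6 else { return SIMD3<Float>(repeating: 0) }
    let dir = delta / length
    let stretch = length - s.restLength                             // Hooke term
    let closingSpeed = simd_dot(velocities[s.b] - velocities[s.a], dir)
    return (s.stiffness * stretch + s.damping * closingSpeed) * dir
}
```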
Typically, these sorts of animations are created using what is called the Finite Element Method (FEM). The FEM does converge to a continuum, which is nice. And although it does require a bit more know-how than a mass-spring system, it really isn't too bad. The basic idea, derived from the study of continuum mechanics, can be put this way:
Break the volume of your object up into many small pieces (elements), usually tetrahedra. Let's call the entire collection of these elements the mesh. You'll actually want to make two copies of this mesh. Label one the "rest" mesh, and the other the "world" mesh. I'll tell you why next.
For each tetrahedron in your world mesh, measure how deformed it is relative to its corresponding rest tetrahedron. This measure of deformation is called "strain". It is typically computed by first measuring what is known as the deformation gradient (often denoted F). There are several good papers that describe how to do this. Once you have F, one very typical way to define the strain (e) is:
e = 1/2 (F^T F - I). This is known as Green's strain. It is invariant to rotations, which makes it very convenient.
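As a small illustration of that step, here is one common way to compute F and Green's strain for a single tetrahedron, assuming the convention F = Dw * Dr^-1, where the columns of Dr and Dw are the rest and world edge vectors taken from the tetrahedron's first vertex (a standard formulation, not necessarily the one used in the notes linked later in this answer):

```swift
import simd

// Green's strain e = 1/2 (F^T F - I) for one tetrahedron, given its four
// rest-space and world-space vertex positions.
func greenStrain(rest: [SIMD3<Double>], world: [SIMD3<Double>]) -> simd_double3x3 {
    let Dr = simd_double3x3(columns: (rest[1] - rest[0],
                                      rest[2] - rest[0],
                                      rest[3] - rest[0]))
    let Dw = simd_double3x3(columns: (world[1] - world[0],
                                      world[2] - world[0],
                                      world[3] - world[0]))
    let F = Dw * Dr.inverse                                        // deformation gradient
    return 0.5 * (F.transpose * F - matrix_identity_double3x3)     // e = 1/2 (F^T F - I)
}
```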
Using the properties of the material you are trying to simulate (gelatin, rubber, steel, etc.), and using the strain you measured in the step above, derive the "stress" of each tetrahedron.
For each tetrahedron, visit each node (vertex, corner, point (these all mean the same thing)) and average the area-weighted normal vectors (in the rest shape) of the three triangular faces that share that node. Multiply the tetrahedron's stress by that averaged vector, and there's the elastic force acting on that node due to the stress of that tetrahedron. Of course, each node could potentially belong to multiple tetrahedra, so you'll want to be able to sum up these forces.
Integrate! There are easy ways to do this, and hard ways. Either way, you'll want to loop over every node in your world mesh and divide its forces by its mass to determine its acceleration. The easy way to proceed from here (sketched in code after these steps) is to:
Multiply its acceleration by some small time value dt. This gives you a change in velocity, dv.
Add dv to the node's current velocity to get a new total velocity.
Multiply that velocity by dt to get a change in position, dx.
Add dx to the node's current position to get a new position.
This approach is known as explicit forward Euler integration. You will have to use very small values of dt to get it to work without blowing up, but it is so easy to implement that it works well as a starting point.
Repeat steps 2 through 5 for as long as you want.
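Here is the sketch mentioned above: a minimal explicit forward Euler update covering steps 2 through 5. The array layout and names are illustrative:

```swift
import simd

// Explicit forward Euler step. `forces` is assumed to hold the per-node force
// sums accumulated from the stress step (plus gravity, etc.).
func explicitEulerStep(positions: inout [SIMD3<Double>],
                       velocities: inout [SIMD3<Double>],
                       forces: [SIMD3<Double>],
                       masses: [Double],
                       dt: Double) {
    for i in positions.indices {
        let acceleration = forces[i] / masses[i]   // a = F / m
        velocities[i] += acceleration * dt         // dv = a * dt
        positions[i] += velocities[i] * dt         // dx = v * dt
    }
}
```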
I've left out a lot of details and fancy extras, but hopefully you can infer a lot of what I've left out. Here is a link to some instructions I used the first time I did this. The webpage contains some useful pseudocode, as well as links to some relevant material.
http://sealab.cs.utah.edu/Courses/CS6967-F08/Project-2/
The following link is also very useful:
http://sealab.cs.utah.edu/Courses/CS6967-F08/FE-notes.pdf
This is a really fun topic, and I wish you the best of luck! If you get stuck, just drop me a comment.
That rolling jelly cube video was made with Blender, which uses the Bullet physics engine for soft body simulation. The Bullet documentation in general is very sparse, and for soft body dynamics it's almost nonexistent. Your best bet would be to read the source code.
Then write your own version ;)
Here is a page with some pretty good tutorials on it. The one you are looking for is probably in the (inverse) Kinematics and Mass & Spring Models sections.
Hint: A jelly can be seen as a 3 dimensional cloth ;-)
Also, try having a look at the search results for spring pressure soft body model - they might get you going in the right direction :-)
See Maciej Matyka's page on the topic of soft bodies.
Unfortunately they're 2D only, but JellyPhysics and JellyCar might be something to start with.
Can someone point me to a paper/algorithm/resource/whatever that tells me how to implement a texture minification filter (which applies when texels are smaller than pixels) in a raytracer?
thanks!
Since you are using ray tracing, I suspect you are looking for high-quality filtering that changes sampling dynamically based on the amount of "error". Based on this assumption, I would say take a look at "ray differentials". There's a nice paper on this here: http://graphics.stanford.edu/papers/trd/ and it takes effects like refraction and reflection into account.
Your answer to yourself sounds like the right approach, but since others may stumble across this page I'll add a resource link as requested. In addition to mipmapping (ripmapping is basically more advanced mipmapping), the paper discusses the effects of reflection and refraction on derivatives and mip-level selection.
Homan Igehy. "Tracing Ray Differentials." 1999. Proceedings of SIGGRAPH. http://graphics.stanford.edu/papers/trd/
Upon closer reading I see that Rehno Lindeque mentioned this paper. At first I didn't realize it was the right reference, because he says the method samples dynamically based on the sampling error, which is incorrect. Filtering is done based on the size of the pixel's footprint and uses only one ray, just as you described.
Edit:
Here is another reference that might be useful ( http://www.cs.unc.edu/~awilson/class/238/#challenges ). Scroll to the section "Derivatives of Texture Coordinates." He suggests backward mapping of texture derivatives from the surface to the screen. I think this would be incorrect for reflected and refracted rays, but it is possibly easier to implement and should be okay for primary rays.
I think you mean mipmapping.
Here is an article talking about using them.
But neither says how to choose which mipmap level to use; in practice the two nearest levels (the larger and the smaller mipmap) are often blended.
Here's one more article about how Google Earth works; it talks about how they mipmap the Earth.
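For what it's worth, the usual way to choose (and blend) mipmap levels is to take log2 of the pixel's footprint in texels and interpolate between the two nearest levels. A hedged sketch, where sampleMip stands in for a hypothetical per-level lookup and is not part of any real API:

```swift
import Foundation

// Pick a fractional mip level from the footprint size and blend the two
// nearest levels (trilinear-style). sampleMip(level) returns an RGB color.
func filteredSample(footprintTexels: Double,
                    mipCount: Int,
                    sampleMip: (Int) -> SIMD3<Double>) -> SIMD3<Double> {
    let level = min(Double(mipCount - 1), max(0, log2(max(footprintTexels, 1))))
    let lower = Int(level.rounded(.down))
    let upper = min(lower + 1, mipCount - 1)
    let t = level - Double(lower)
    return (1 - t) * sampleMip(lower) + t * sampleMip(upper)
}
```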
Thank you guys for your answers, but since I didn't find any appropriate technique, I created something myself which turned out to work very well:
I treat my ray as a cone with a cone radius of half a pixel on the image plane. When the ray hits a surface, I calculate the ellipse which is projected onto the surface (the ellipse from the plane-cone intersection). Then, using the texture coordinate derivatives at the intersection point, I project this ellipse into texture space. Now I know which part of the texture lies under my pixel and can subsample that area.
I also use RipMaps to improve the quality, and I choose the RipMap level based on the size of the ellipse in texture space.
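A simplified sketch of that kind of RipMap level selection, assuming the ellipse's extents have already been measured in texels along u and v (the names and level layout are illustrative):

```swift
import Foundation

// RipMaps are downsampled independently along u and v, so each axis gets its
// own level chosen from the ellipse's extent along that axis (in texels).
func ripMapLevel(footprintU: Double, footprintV: Double,
                 levelsU: Int, levelsV: Int) -> (u: Int, v: Int) {
    let lu = Int(log2(max(footprintU, 1)).rounded())
    let lv = Int(log2(max(footprintV, 1)).rounded())
    return (u: min(max(lu, 0), levelsU - 1),
            v: min(max(lv, 0), levelsV - 1))
}
```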