I am looking for a technical answer to the question:
What is the difference between a “real-world object”, a surface, and an AR anchor in ARKit?
As far as I can tell:
1) ARKit offers 3 different methods to search for “real-world objects”, surfaces, and AR anchors:
ARSCNView hitTest(_:types:)
https://developer.apple.com/documentation/arkit/arscnview/2875544-hittest
ARSKView hitTest(_:types:)
https://developer.apple.com/documentation/arkit/arskview/2875733-hittest
ARFrame hitTest(_:types:)
https://developer.apple.com/documentation/arkit/arframe/2875718-hittest
I understand that to look for SceneKit/SpriteKit content displayed in the view you need to use different hitTest methods.
I just can’t understand what a “real-world object” is vs. a surface vs. an AR anchor.
My best guess is:
Real-world object:
- I don’t know?
Surface:
- featurePoints
- estimatedHorizontalPlane
- estimatedVerticalPlane
AR anchors:
ARImageAnchor
ARFaceAnchor
ARPlaneAnchor
I think you get the idea... so what is a “real-world object” in ARKit?
Any help would be great. The documentation seems to really emphasize the difference between “real-world object” and “surface”.
Thank you
Smartdog
We all learn by sharing what we know
My intuition is that whoever wrote Apple’s docs kept things ambiguous because a) you can use those methods for multiple kinds of hit tests, and b) ARKit doesn’t really know what it’s looking at.
If you do a hit test for any of the plane-related types (existingPlane, estimatedHorizontalPlane, etc.), you’re looking for real-world flat surfaces. Or rather, you’re looking for things that ARKit “thinks” look like flat horizontal (or vertical, in iOS 11.3 and later) surfaces. Those might or might not accurately reflect the shape of the real world at any given moment, but they’re ARKit’s best guess. Which of the plane-related types you search for determines whether a result is tied to an existing ARPlaneAnchor (the existingPlane types) or is only a momentary estimate with no anchor attached (the estimated types).
(Note false negatives are more common than false positives. For example, you might find no hit result at a point where a corner of a tabletop hasn’t been mapped by ARKit, but you’re unlikely to find a plane hit result with no corresponding real-world flat surface.)
If you do a featurePoint hit test, you’re testing against ARKit’s sparse map of the user’s environment. If you turn on the showFeaturePoints debug option on ARSCNView you can see this map — in each frame of video, ARKit finds tens to a few hundred points that are visually interesting enough (well, “interesting” from a particular algorithm’s point of view) that it can correlate their 2D positions between frames and use parallax differences to estimate their distances from the camera and their positions in 3D space. (In turn, that informs ARKit’s idea of where the device itself is in 3D space.)
Because a “feature point” can be any small, high-contrast region in the camera image, it doesn’t really correlate to any specific kind of real-world thing. If you’re looking at a desk with a nice wood-grain pattern, you’ll see a lot of feature points along the plane of the desktop. If you’re looking at a desk with, say, a potted plant on it, you’ll see some points on the desktop and some on the pot and some on the leaves... not enough points that you (or a CV algorithm) can really intuit the shape of the plant. But enough that, if the user were to tap on one of those points, your app could put some 3D object there and it might convincingly appear to stick to the plant.
So, in the most general sense, ARKit hit testing is looking for “objects” of some sort in the “real world” (as perceived by ARKit), but unless you’re looking for planes (surfaces) one can’t really be more specific about what the “objects” might be.
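To make that concrete, here's a minimal sketch using the hitTest(_:types:) API from the question (since deprecated in favor of raycasting in iOS 13, but it's the API being asked about). The tap-handler setup and function name are assumed:

```swift
import ARKit

// Hypothetical tap handler: test a screen point against both detected
// planes and raw feature points.
func handleTap(at point: CGPoint, in sceneView: ARSCNView) {
    let results = sceneView.hitTest(point, types: [.existingPlaneUsingExtent,
                                                   .estimatedHorizontalPlane,
                                                   .featurePoint])
    // Results are sorted nearest-first.
    guard let hit = results.first else { return }
    if let plane = hit.anchor as? ARPlaneAnchor {
        // A surface ARKit is tracking: the result is tied to an anchor.
        print("Hit a detected plane:", plane.alignment)
    } else {
        // A feature point or momentary plane estimate: no anchor, just a
        // position in the "real world" as ARKit perceives it.
        print("Hit at world position:", hit.worldTransform.columns.3)
    }
}
```

The presence or absence of hit.anchor is the practical difference: detected surfaces come back tied to an ARPlaneAnchor, while feature points are just positions with no claim about what kind of object they belong to.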
Related
Let's say I create many different 3D objects (arms, legs, body, head). How would I go about making it so all of them can connect to form a body? And after the body is formed, how do I make them all animate in the same way? Can anyone point me in the right direction with tutorials or links on these topics?
Here are some good tutorials:
https://www.youtube.com/watch?v=q3oVAXhvQl0
https://www.youtube.com/watch?v=Ch1aJljEYp4
https://medium.com/@larissaredeker/3d-character-customization-fd95a1d57ae
And here is a reddit user's answer (from DankP3):
For body sliders you will probably want to look at blendshapes; this is done in the modelling programme. The more blendshapes you have, the more complex it will get, e.g. 'normal', 'thin', and 'muscular' bodies at one scale of variation and thicker eyebrows at the other. Of note, though: facial morphs don't affect most clothes!
I think one of the most challenging parts will be that using the same animation on a thin character and a muscle-bound character will not look good, so some significant animation work may be required.
I'd probably apply hairstyles the same way as clothes (see below), but hair colour can easily be managed through code, swapping base colours or textures.
For clothes, you rig to the same skeleton as the character (and you will need the same blendshapes to fit the body, or if your body is to change shape). A lot of people put all clothes in the same model and turn them on and off as required (this intuitively seems messy), but looping through the bone array on the mesh renderer is not difficult, and contrary to what the other reply says, you don't need to match bones by angle and position. If you rig correctly, you can just swap, rather than match, the bones in the clothes mesh renderer for those on the character model when the clothes are put on, and you are all done.
Again, all credit goes to DankP3
You can see his answer HERE.
Is there a tech-community-agreed term for a photographic (well, as close as possible) scene that can be explored by walking around? Obviously, within certain limits. Say, a museum could scan a sculpture with a laser and make it available in VR as a 3D mesh with properly mapped textures. Is there a name for such a thing? The so-called 360 VR photos definitely fall short of such detail.
I think the most common names are:
360: just an image from one point containing all the angles, usually an equirectangular or cubemap texture/video. Some have stereoscopy, but it's very limited.
360 with depth: a 360 that, apart from color, also has depth information. This allows stereoscopy and some movement, but because of shadowing and the difficulty of acquiring depth maps it's almost never used. In the future, AI-based filling of shadowed areas, and perhaps replacing the need for capturing depth, might make this a commonly used format.
photogrammetry: the scene is converted to a textured mesh, has proper depth, and can be viewed from all angles (for example The Vanishing of Ethan Carter; unfortunately the 3D models from that article seem to be missing, I sent them an email, maybe they'll fix it).
lightfield: a volume containing lots of 360 images with some kind of interpolation between them. Has proper depth but can be viewed only from within the mapped volume (see Welcome To Lightfields).
I am working on an application where I would like to make a 3D terrain model of my country in Unity3D, inclusive of hills, mountains, and rivers. So far I've been able to use Mapbox to import the country map, because the Unity WRLD SDK doesn't yet support my country.
However, the end goal is to create an application capable of representing natural disasters. For example: I have the country, and I would like to know how one would go about causing rain to occur that would affect the "water levels" of the river and eventually show a flood. Basically, after I bring in the terrain, how do I "act" on it to cause a landslide?
Any help or tutorial pointing to such would be welcomed.
You will need a different model for each natural disaster, and you will always only get a rough estimate of what may happen, as your data will never represent the actual terrain. (For example, with an earthquake you may be able to reproduce damage to structures, but never predict whether there will be a drift in the earth itself.)
Rain / Flood
A really simple simulation of rising ground water is slowly moving a "water" plane up, as sketched below. This crude approach will quite easily demonstrate which areas are going to be under water. For a detailed flood simulation you will need a fluid simulation of some kind (there are quite a few on the asset store).
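A minimal sketch of that idea, in SceneKit/Swift for concreteness (the technique is engine-agnostic; in Unity you would animate a plane's transform.position.y the same way, and `scene` here stands in for your scene):

```swift
import SceneKit
import UIKit

// A translucent "water" plane laid flat over the terrain.
let water = SCNNode(geometry: SCNPlane(width: 100, height: 100))
water.eulerAngles.x = -.pi / 2   // SCNPlane is vertical by default; lay it flat
water.position.y = 0             // starting ground-water level
water.geometry?.firstMaterial?.diffuse.contents =
    UIColor.blue.withAlphaComponent(0.5)
scene.rootNode.addChildNode(water)

// Raise the water level over a minute; any terrain below water.position.y
// reads as flooded.
water.runAction(SCNAction.moveBy(x: 0, y: 5, z: 0, duration: 60))
```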
Avalanche
Treat it as a fluid system with a strong resistance.
Volcano
Almost the same as a flood, just with more viscosity.
Earthquake
You may be able to simulate the damage of an earthquake if all your objects have some kind of break point and the earthquake is a force applied to an area. A set force has a certain chance to destroy each object in the area. (Think of it in terms of any castle-destroying game, e.g. Angry Birds: the bullet is your local earthquake and the castle is your terrain + buildings/trees.) A sketch of this rule follows.
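A minimal sketch of the break-point idea (Swift for concreteness; the names, the linear falloff, and the 50% survival chance are invented placeholders):

```swift
import simd

// A destructible object with a force threshold beyond which it may break.
struct Destructible {
    var position: SIMD3<Float>
    var breakPoint: Float           // force needed to destroy this object
}

// Apply a quake at an epicenter; return the survivors.
func applyQuake(at epicenter: SIMD3<Float>, force: Float, radius: Float,
                to objects: [Destructible]) -> [Destructible] {
    objects.filter { obj in
        let d = simd_distance(obj.position, epicenter)
        guard d < radius else { return true }        // out of the affected area
        let localForce = force * (1 - d / radius)    // force falls off with distance
        // Strong enough to resist, or lucky: a set force only has a
        // *chance* to destroy objects in the area.
        return obj.breakPoint > localForce || Float.random(in: 0..<1) > 0.5
    }
}
```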
Fire
You will need something like a burn value: the higher the value, the longer it burns, the harder it is to put out, and the faster it spreads. If a fire starts at any given point, it grows outward from there. A river would have a value of 0, same as mountains; a forest would have a high value, a grass plain a low value. If you want to simulate a hot, dry summer, your terrain could add a fixed value to everything: grass dries out and thus has a higher chance of spreading fire. A grid-based sketch of this rule follows.
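A minimal sketch of one spread step (Swift for concreteness; the names and probability scaling are made up):

```swift
// One spread step: each burning cell tries to ignite its four neighbours.
// burnValue 0 = non-flammable (river, mountain); higher = spreads more easily.
// A hot, dry summer is just a constant added to every cell's flammability.
func spreadFire(burnValue: [[Float]],
                burning: Set<SIMD2<Int>>,
                dryness: Float) -> Set<SIMD2<Int>> {
    var next = burning
    let rows = burnValue.count
    let cols = burnValue.first?.count ?? 0
    for cell in burning {
        for d in [SIMD2(1, 0), SIMD2(-1, 0), SIMD2(0, 1), SIMD2(0, -1)] {
            let n = cell &+ d
            guard n.x >= 0, n.x < rows, n.y >= 0, n.y < cols else { continue }
            let flammability = burnValue[n.x][n.y]
            guard flammability > 0 else { continue }   // rivers/mountains never burn
            // Higher burn value (plus dryness) = higher chance to spread.
            if Float.random(in: 0..<1) < min(flammability + dryness, 1) {
                next.insert(n)
            }
        }
    }
    return next
}
```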
A recent question here made me think of SceneKit again, and I remembered a problem I never solved.
My app displays antenna designs using SK. Most antennas use metal rods and mesh reflectors so I used SCNCylinder for the rods, SCNPlane for the reflector and SCNFloor for the ground. The whole thing took a couple of hours, and I'm utterly noob at 3D.
But some antennas use wires bent into arcs or helixes, and I punted here and made crappy segmented objects using several cylinders end-to-end. It looks ass-tastic.
Ideally I would like a single object that renders the arc or helix with a cylindrical cross section. Basically SCNTorus, but with a start and end angle. This post talks about using a UIBezierPath in SK, but it uses extrude to produce a ribbon-like shape. Is there a way to do something similar but with a cylinder cross section (like a partial SCNTorus)?
I know I can make a custom shape by creating the vertexes (and normals and such) but I'm hoping I missed a simpler solution.
An arc you can do with SCNShape. Start with the technique from my other answer to get an extruded, ribbon-like arc. You'll want to make sure that the part where your path traces back on itself is offset by a distance the same as your extrusion depth, so you end up with a shape that's square in cross section.
To make it circular in cross section, use the chamferProfile property — give it a path that's a quarter circle, and set the chamfer radius equal to half the extrusion depth, and the four quarter-circle chamfers will meet, forming a circular cross section.
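A minimal sketch of that setup, assuming `arcPath` is the doubled-back extruded-arc path from the technique above (the quarter circle is approximated with the standard cubic Bézier, k ≈ 0.5523):

```swift
import SceneKit
import UIKit

let wireRadius: CGFloat = 0.05   // desired radius of the wire's cross section

// Extrude the arc path; extrusion depth = wire diameter.
let shape = SCNShape(path: arcPath, extrusionDepth: wireRadius * 2)

// Quarter-circle chamfer profile in the unit square, running from (0,1)
// to (1,0) and bulging outward so the rounded edge is convex.
let profile = UIBezierPath()
profile.move(to: CGPoint(x: 0, y: 1))
profile.addCurve(to: CGPoint(x: 1, y: 0),
                 controlPoint1: CGPoint(x: 0.5523, y: 1),
                 controlPoint2: CGPoint(x: 1, y: 0.5523))

// Chamfer radius = half the extrusion depth, so the four rounded
// chamfers meet and the cross section becomes a full circle.
shape.chamferMode = .both
shape.chamferRadius = wireRadius
shape.chamferProfile = profile
```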
A helix is another story. SCNShape takes a planar path — one that varies in only two dimensions — and extrudes it to make a three-dimensional solid. A helix is a path that varies in three dimensions to start with. SceneKit doesn't have anything that describes a shape in such terms, so there's no super simple answer here.
The shader modifier solution @HalMueller alludes to is interesting, but problematic. It's simple to use a modifier at the geometry entry point to make a simple bend — say, offset every y coordinate by some amount, even by an amount that's a function of y. But that's a one-dimensional transform, so you can't use it to wrap a wire around on itself. (It also changes the cross section.) And on top of that, shader modifiers happen on the GPU at render time, so their effects are an illusion: the "real" geometry in SceneKit's model is still a cylinder, so features like hit testing apply to that and not to the transformed geometry.
The best solution to making something like a helix is probably custom geometry — generating your own vertex data (SCNGeometrySource). The math for finding the set of points on a helix is pretty simple if you follow that shape's definition. To wrap a cross section around it, follow the Frenet formulas to create a local coordinate frame at each point on the helix. Then make an index buffer (SCNGeometryElement) to stitch all those points into a surface with triangles or tristrips. (Okay, that's a lot of hand-waving around a deep topic, but a full tutorial is too big for an SO answer. This should be enough of a breadcrumb to get started, though...)
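To give that hand-waving a little more substance, here's a sketch of the vertex-generation part. It uses the closed-form frame of a helix instead of general Frenet machinery; all names and parameters are made up, and it assumes iOS, where SCNVector3 components are Float:

```swift
import SceneKit

// Hypothetical helper: a helical tube as custom SceneKit geometry.
func makeHelixTube(helixRadius: Float = 0.5,   // radius of the coil
                   tubeRadius: Float = 0.05,   // radius of the wire itself
                   turns: Float = 3,
                   pitch: Float = 0.3,         // rise per full turn
                   pathSegments: Int = 200,
                   ringSegments: Int = 16) -> SCNGeometry {
    var vertices: [SCNVector3] = []
    var normals: [SCNVector3] = []
    var indices: [Int32] = []

    for i in 0...pathSegments {
        let t = Float(i) / Float(pathSegments) * turns * 2 * .pi
        // Point on the helix (y is the helix axis).
        let center = SCNVector3(helixRadius * cos(t),
                                pitch * t / (2 * .pi),
                                helixRadius * sin(t))
        // Local frame: unit tangent, inward-pointing normal, binormal = T x N.
        let dy = pitch / (2 * .pi)
        let tLen = sqrt(helixRadius * helixRadius + dy * dy)
        let tangent = SCNVector3(-helixRadius * sin(t) / tLen,
                                 dy / tLen,
                                 helixRadius * cos(t) / tLen)
        let normal = SCNVector3(-cos(t), 0, -sin(t))
        let binormal = SCNVector3(
            tangent.y * normal.z - tangent.z * normal.y,
            tangent.z * normal.x - tangent.x * normal.z,
            tangent.x * normal.y - tangent.y * normal.x)

        // One ring of vertices around this point on the helix.
        for j in 0...ringSegments {
            let a = Float(j) / Float(ringSegments) * 2 * .pi
            let dir = SCNVector3(
                normal.x * cos(a) + binormal.x * sin(a),
                normal.y * cos(a) + binormal.y * sin(a),
                normal.z * cos(a) + binormal.z * sin(a))
            vertices.append(SCNVector3(center.x + tubeRadius * dir.x,
                                       center.y + tubeRadius * dir.y,
                                       center.z + tubeRadius * dir.z))
            normals.append(dir)
        }
    }

    // Stitch consecutive rings together with triangles.
    let ring = ringSegments + 1
    for i in 0..<pathSegments {
        for j in 0..<ringSegments {
            let a = Int32(i * ring + j)
            let b = a + Int32(ring)
            indices += [a, b, a + 1, a + 1, b, b + 1]
        }
    }

    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    return SCNGeometry(sources: [SCNGeometrySource(vertices: vertices),
                                 SCNGeometrySource(normals: normals)],
                       elements: [element])
}
```

Wrap the result in an SCNNode and it behaves like any other geometry, hit testing included, unlike the shader-modifier illusion.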
Here are some starting points that might help.
One approach would be to use more cylinders and make them shorter. That's the same idea behind the various segmentCount properties on the SCNGeometry primitives. Can we see a screenshot of the current linked cylinders version?
If you increase the heightSegmentCount, you could use the approach outlined here: scenekit, how to bend an object.
I just took a look at SCNShape. I was thinking you could use a shader modifier to warp the extruded shape into a circular cross section. But SCNShape doesn't seem to expose a segment count property, which I think you'd need to create enough extrusion segments for a good look. The chamferRadius and chamferProfile properties look interesting. I wonder if you could use those to create an extrusion that looks good.
I'm hoping to prototype some very basic physics/statics simulations for "voxel-based" games like Minecraft and Dwarf Fortress, so that the game can detect when a player has constructed a structure that should not be able to stand up on its own. Obviously this is a very fuzzy definition -- whether a structure is impossible depends upon a multitude of material and environmental properties -- but the general idea is to motivate players to build structures that resemble the buildings we see in the real world. I'll describe what I mean in a bit more detail below, but I generally want to know if anyone could suggest either a potential approach to the problem or a resource that I could use.
Here are some example situations: buildings that could be impossible if the material were not strong enough. My understanding of this subject is not great, but bear with me.
If this structure were to be made of concrete with dimensions of, say, 4m by 200m, it would probably not be able to stand up. Because the center of mass is not over its connection to the ground, I think it would either tip over or crack at the base.
The center of gravity of this arch lies between the columns holding it up, but if it was very big and made of a weak, heavy material, it would crumble under its own weight.
This tower has its center of gravity right over its base, but if it is sufficiently tall then it only takes a bit of force for the wind to topple it over.
Now, I expect that a full-scale real-time simulation of these physics isn't really possible... but there are a lot of ways that I could simplify the simulation. For example:
Tests for physics-defying structures could be performed infrequently and at random, so a bad building doesn't crumble right as soon as it is built, but perhaps as much as a few minutes later.
Minecraft and Dwarf Fortress hardly perform rigid- or soft-body physics. For this reason, any piece of a building that is deemed to be physically impossible can simply "pop" into rubble instead of spawning a bunch of accurate physics props.
Have you considered taking an existing 3D environment physics engine and "rounding off" the orientations of objects? In the case of your first object (the L-shaped thing), you could run a simulation of a continuous, non-voxelized object of similar shape behind the scenes and then monitor that object for orientation changes. In a simple case, if the continuous representation of the object hits the ground, the object in the voxelized gameplay world could move its blocks to the ground. A sketch of the monitoring step follows.
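A minimal sketch, using SceneKit's physics engine for concreteness (the proxy shape and tilt threshold are placeholders; any engine with rigid bodies works the same way):

```swift
import SceneKit

// Hidden continuous proxy for an L-shaped voxel structure, approximated
// here by a single box; a real proxy would match the structure's shape.
let proxy = SCNNode(geometry: SCNBox(width: 1, height: 4, length: 1,
                                     chamferRadius: 0))
proxy.isHidden = true
proxy.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)

// Call once per frame (e.g. from renderer(_:updateAtTime:)) to see
// whether the physics simulation has tipped the proxy past a threshold.
func proxyHasToppled(_ proxy: SCNNode, threshold: Float = .pi / 6) -> Bool {
    // presentation carries the physics-driven transform.
    let tilt = proxy.presentation.eulerAngles
    return abs(tilt.x) > threshold || abs(tilt.z) > threshold
}
```

When proxyHasToppled returns true, the voxel world can snap the blocks down or "pop" them into rubble, per the simplification above.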
I don't think there is a feasible way to do this. Minecraft has no notion of physical structure, so you will have to look at each block individually to determine if it should fall (there are other considerations, but this is the minimum). You would therefore need a way to distinguish between ground and "not ground". This is a modeling problem first and foremost, not a programming problem (not even a simulation-design problem). I think this question is out of scope for SO.
For instance, consider the following model, which may give you an indication of the complexities involved:
each block above height = 0 experiences a "down pull" = P, P may be any of the following:
0 if the box is supported by another box
m*g (where m is its mass, which depends on material density * voxel volume) otherwise, i.e. if the box is free
F represents some "friction" or "glue" between vertical faces of boxes; it counteracts P.
This friction should have a threshold beyond which it "breaks" and the block then has a net pull downwards.
if m*g < sum F, box stays where it is. Otherwise, box falls.
F depends on the pairs of materials in contact
F can act through chains of blocks (n = 2 or more), so you can form a line of blocks between two towers.
F is what causes the net pull on a box to be larger than m*g. For instance, if you have blocks a-b-c in a row, with c resting on d, then a pulls on b, so b should be "heavier" than m*g where it contacts c. If this net pull is > F, then the pair a-b should fall.
You might be able to simulate the above and get interesting results, but you will find it really challenging to handle the case where there are two towers with a line of blocks between them: the towers are coupled together by the line of blocks, and there is no longer a "tip" to the line of blocks. At this stage you might as well get out your physics books, create a system of boxes and springs, and come up with equations that you might be able to solve numerically; in a full 3D system you will have a 3D mesh of springs to navigate iteratively to converge on the force values for each box and determine which ones move.
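Before the spring-system stage, the simple per-block part of the model is easy to sketch (Swift for concreteness; all types and names are hypothetical, and it deliberately ignores the chain/coupling cases just described):

```swift
// Hypothetical single-block test, directly following the model above.
struct Block {
    var mass: Float               // material density * voxel volume
    var supportedFromBelow: Bool  // resting on another box (or the ground)
    var faceGlue: [Float]         // F for each vertical face in contact
}

func blockFalls(_ block: Block, g: Float = 9.81) -> Bool {
    if block.supportedFromBelow { return false }   // P = 0
    let pull = block.mass * g                      // P = m*g for a free box
    let glue = block.faceGlue.reduce(0, +)         // sum of F over contacts
    return pull >= glue                            // falls once glue can't hold it
}
```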
A professor of mine suggested that I look at this paper.
Additionally, I found the keyword for what I'm looking for: "structural analysis." I bought a textbook, and I have a long road ahead of me.