3D model is not the same size as the real product in ARKit – Swift

Problem statement
I am having trouble in AR: our model's size does not match the real product. This iPhone model has the same dimensions as a real iPhone, but it does not match the real product in the AR view.
iPhone model size: 5.44 x 2.64 x 0.28 in
A reference screenshot and a downloadable reference .obj file were attached to the question.

It's highly likely that the scale of the .obj is not correct.
If you load your model into the SceneKit editor and click the 'cube' icon (the Node Inspector) on the right-hand side, you will see a Bounding Box field in the Transforms section.
You can check there whether your model is actually the right size.
In my example my model is an SCNPlane with a width and height of 0.1m (10cm).
If the bounding box is not correct, you will need to set the scale, which can be done using the scale property of an SCNNode, e.g.:
model.scale = SCNVector3(0.1, 0.1, 0.1)
Whereby:
Each component of the scale vector multiplies the corresponding dimension of the node's geometry. The default scale is 1.0 in all three dimensions. For example, applying a scale of (2.0, 0.5, 2.0) to a node containing a cube geometry reduces its height and increases its width and depth.
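If you prefer to correct the scale in code rather than in the editor, you can derive the factor from the node's bounding box. A minimal sketch, assuming the long side of the iPhone model should end up at 5.44 in ≈ 0.138 m (fit(node:toHeightInMeters:) is just an illustrative helper, not a SceneKit API):
import SceneKit

// Scale a node uniformly so the height of its bounding box matches a
// real-world size in meters. Sketch only – adapt to your own model.
func fit(node: SCNNode, toHeightInMeters target: Float) {
    let (minBounds, maxBounds) = node.boundingBox
    let currentHeight = maxBounds.y - minBounds.y      // in model units
    guard currentHeight > 0 else { return }
    let factor = target / currentHeight
    node.scale = SCNVector3(factor, factor, factor)
}

fit(node: model, toHeightInMeters: 0.138)   // 5.44 in ≈ 0.138 m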
Hope it helps...

Related

Size of 3D models in AR

I am trying to place large 3D models (SCNNode) in ARSCNView using ARKit.
The approximate size (from a screenshot omitted here) is roughly 99 x 184 x 43 in model units.
I have been through the following links:
is-there-any-size-limitations-on-3d-files-loaded-using-arkit
load-large-3d-object-scn-file-in-arscnview-aspect-fit-in-to-the-screen-arkit-sw
As per the upvoted answer by alex papa in the above links, the model gets placed in the scene, but it appears to hang in the air above the ground. The 3D object seems to float instead of sitting on the detected/tapped horizontal plane from the hit test.
The x and z positions are right, but y appears to be several meters above the horizontal plane.
I need the scale to be 1.0. Without scaling down the 3D model, is it possible to place and visualise it correctly?
Any help or leads would be appreciated!
The unit of measurement in ARKit, SceneKit and RealityKit is meters. Hence, your model's size is 99 m x 184 m x 43 m. The solution is simple – take one hundredth of the nominal scale:
let scaleFactor: Float = 0.01
node.scale = SCNVector3(scaleFactor, scaleFactor, scaleFactor)
And here you can read about positioning of pivot point.
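If the model still hovers after scaling, that is usually because the pivot is not at the base of the geometry. A minimal sketch of one common fix, shifting the pivot to the bottom-center of the bounding box so the node rests on the tapped plane (shiftPivotToBottom is an illustrative helper and hitResult stands in for your hit-test/raycast result; both are assumptions, not ARKit API):
import SceneKit

// Move the pivot to the bottom-center of the node's bounding box,
// so the node "stands" on whatever point it is placed at.
func shiftPivotToBottom(of node: SCNNode) {
    let (minBounds, maxBounds) = node.boundingBox
    let centerX = (minBounds.x + maxBounds.x) / 2
    let centerZ = (minBounds.z + maxBounds.z) / 2
    node.pivot = SCNMatrix4MakeTranslation(centerX, minBounds.y, centerZ)
}

shiftPivotToBottom(of: node)
node.position = SCNVector3(hitResult.worldTransform.columns.3.x,
                           hitResult.worldTransform.columns.3.y,
                           hitResult.worldTransform.columns.3.z)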

ARKit – Are the values entered for Reference Image dimensions important?

In ARKit, I am using the height and width of reference images (as entered by me into Xcode in the AR Resource Group) to overlay planes of the same size onto matched images. Regardless of whether I enter accurate reference image dimensions, ARKit accurately overlays the plane onto the real world (i.e., the plane correctly covers the matched image in the ARSCNView).
If I understand correctly, estimatedScaleFactor tells me the difference between the true size of the reference image and the values I entered in the Resource Group.
My question is: if ARKit is able to figure out the true size of the object shown in the reference image, when/why would I need to worry about entering accurate height and width values?
(My reference images are public art and accurately measuring them is sometimes difficult.)
Does ARKit have to work harder, or are there scenarios where I would stop getting good results without accurate Reference Image measurements?
ADDITIONAL INFO: As a concrete example, if I was matching movie posters, I would take a photo of the poster, load it into an AR Resource Group, and arbitrarily set the width to something like one meter (allowing Xcode to set the other dimension based on the proportions of the image).
Then, when ARKit matches the image, I would put a plane on it in renderer(_:didAdd:for:)
let plane = SCNPlane(width: referenceImage.physicalSize.width,
                     height: referenceImage.physicalSize.height)
plane.firstMaterial?.diffuse.contents = UIColor.planeColor
let planeNode = SCNNode(geometry: plane)
planeNode.eulerAngles.x = -.pi / 2
node.addChildNode(planeNode)
This appears to work as desired: the plane convincingly overlays the matched image, in spite of the fact that the dimensions I entered for the reference image are inaccurate. (And yes, estimatedScaleFactor does give a good approximation of how much my arbitrary dimensions are off by.)
So, what I am trying to understand is whether this will break down in some scenarios (and when, and what I need to learn to understand why!). If my reference image dimensions are not accurate, will that negatively impact placing planes or other objects onto the node provided by ARKit?
Put another way, if ARKit is correctly understanding the world and reference images without accurate ref image measurements, does that mean I can get away with never entering accurate measurements for ref images?
As official documentation suggests:
The default value of estimatedScaleFactor (a factor between the initial size and the estimated physical size) is 1.0, which means that a version of this image that ARKit recognizes in the physical environment exactly matches its reference image physicalSize.
Otherwise, ARKit automatically corrects the image anchor's transform when estimatedScaleFactor is a value other than 1.0. This adjustment in turn, corrects ARKit's understanding of where the image anchor is located in the physical environment.
var estimatedScaleFactor: CGFloat { get }
For a more precise scale of the 3D model, you need to measure your real-world image; then, while the AR app is running, ARKit measures the observed reference image as well. ARImageAnchor stores the result in its estimatedScaleFactor property: ARKit registers the difference in scale and applies the new scale to the 3D model, making it bigger or smaller. That is what estimatedScaleFactor is for.
However, there's also an automatic approach:
To accurately recognize the position and orientation of an image in the AR environment, ARKit must know the image's physical size. You provide this information when creating an AR reference image in your Xcode project's asset catalog, or when programmatically creating an ARReferenceImage.
When you want to recognize different-sized versions of a reference image, you set automaticImageScaleEstimationEnabled to true, and in this case, ARKit disregards physicalSize.
var automaticImageScaleEstimationEnabled: Bool { get set }
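A minimal sketch of how the automatic path might be wired up, assuming sceneView is your ARSCNView and your images live in a resource group named "AR Resources" (both assumptions on my part):
import ARKit

// Let ARKit estimate the real-world size of detected images itself (iOS 13+),
// so the physicalSize entered in the asset catalog no longer has to be exact.
let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: nil) {
    configuration.detectionImages = referenceImages
}
configuration.automaticImageScaleEstimationEnabled = true
sceneView.session.run(configuration)

// Later, inside renderer(_:didAdd:for:), you can check how far off the entered size was:
if let imageAnchor = anchor as? ARImageAnchor {
    // 0.5 would mean the real image is about half the size you entered
    print("estimatedScaleFactor:", imageAnchor.estimatedScaleFactor)
}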

Unity, relative dimensions of GameObjects

I saw some documents saying that there is no concept of length in Unity. All you can do to determine the dimensions of GameObjects is to use Scale.
Then how can I set the overall relative dimensions between GameObjects?
For example, the dimensions of a 1:1:1 plane are obviously different from those of a 1:1:1 sphere! So how can I know the relative ratio between the plane and the sphere, i.e. one unit of the plane's length corresponds to how many units of the sphere's diameter? Otherwise, how can I know whether I have set everything in the right proportion?
Well, what you say is right, but consider that objects can have a collider. In the case of a sphere, you can obtain the radius with SphereCollider.radius.
Also consider Bounds.extents, which relates to the object's bounding box.
Again, for the sphere, you can obtain the diameter with:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Bounds bounds = mesh.bounds;
float diameter = bounds.extents.x * 2;  // extents are half the bounding-box size
All GameObjects in Unity have a Transform component, which determines their position, rotation and scale. Most 3D objects also have a MeshFilter component, which contains a reference to the Mesh object.
The Mesh contains the actual shape of the object, for example the six faces of a cube or the faces of a sphere. Unity provides a handful of built-in objects (cube, sphere, cylinder, plane, quad), but this is just a 'starter kit'. Most of those built-in objects are 1 unit in size, but this is purely because the vertices have been placed in those positions (so you need to scale by 2 to get a 2-unit size).
But there is no limit on vertex positions within a mesh: you can have a tiny, tiny object or a whole terrain object, and have them massively different in size despite keeping their scale at 1.
You should try to learn some 3D modelling application to create arbitrary objects.
Alternatively, try installing a plugin called ProBuilder, which used to be quite expensive and is now free (since being acquired by Unity) and which enables in-editor modelling.
Scales are best kept at one, but it's good to have the option to scale; this way you can re-use the sphere mesh or the cube mesh at different scales (less memory wasted).
In most Unity applications you set the scale to some arbitrary number.
So typically 1 m = 1 unit.
All things that are 1 unit tall are 1 m tall.
If you import a mesh from a modelling program that is the wrong size, scale it to exactly one meter (use a standard 1,1,1 cube as reference). Then, stick it inside an empty game object to “convert” it into your game’s proper scale. So now if you scale the empty object’s y axis to 2, the object is now 2 meters tall.
A better solution is to keep every object's highest parent in the hierarchy at 1,1,1 scale. Using the 1,1,1 reference cube, scale your object to a size that looks proper. For example, if I had a model of a person, I'd want it scaled to be roughly twice as tall as the cube. Then drag it into an empty object of 1,1,1 scale. This way, everything at your scene's "normal" size is 1,1,1, and if you want to double the size of something you'd make it 2,2,2. In practice this is much more useful than the first option.
Now, if you change its position by 1 unit, it effectively moves by what looks like a proper 1 m as well.
This process also lets you change where the "bottom" of an object is. You can change the position of the object inside the empty, creating an "offset". This is useful for making models stand right on the ground at position y = 0.

ARKit project point with previous device position

I'm combining ARKit with a CNN to constantly update ARKit nodes when they drift. So:
1. Get an estimate of the node position with ARKit and place a virtual object in the world
2. Use the CNN to get its estimated 2D location of the object
3. Update the node position accordingly (to refine its location in 3D space)
The problem is that step 2 takes about 0.3 s. Therefore I can't use sceneView.unprojectPoint, because that would use the device's current pose, while the 2D point corresponds to the device's world position from step 1.
How do I calculate the 3D vector from my old location to the CNN's 2D point?
unprojectPoint is just a matrix-math convenience function similar to those found in many graphics-oriented libraries (like DirectX, old-style OpenGL, Three.js, etc). In SceneKit, it's provided as a method on the view, which means it operates using the model/view/projection matrices and viewport the view currently uses for rendering. However, if you know how that function works, you can implement it yourself.
An Unproject function typically does two things:
Convert viewport coordinates (pixels) to the clip-space coordinate system (-1.0 to 1.0 in all directions).
Reverse the projection transform (assuming some arbitrary Z value in clip space) and the view (camera) transform to get to 3D world-space coordinates.
Given that knowledge, we can build our own function. (Warning: untested.)
func unproject(screenPoint: float3,   // see below for Z depth hint discussion
               modelView: float4x4,
               projection: float4x4,
               viewport: CGRect) -> float3 {
    // viewport to clip: subtract viewport origin, divide by size,
    // then scale/offset from 0...1 to -1...1 coordinate space
    let clip = (screenPoint - float3(Float(viewport.origin.x), Float(viewport.origin.y), 0))
        / float3(Float(viewport.width), Float(viewport.height), 1)
        * float3(repeating: 2) - float3(repeating: 1)
    // apply the reverse of the model-view-projection transform
    let inversePM = (projection * modelView).inverse
    let result = inversePM * float4(clip.x, clip.y, clip.z, 1)
    return float3(result.x, result.y, result.z) / result.w   // perspective divide
}
Now, to use it... The modelView matrix you pass to this function is the inverse of ARCamera.transform, and you can also get projectionMatrix directly from ARCamera. So, if you're grabbing a 2D position at one point in time, grab the camera matrices then, too, so that you can work backward to 3D as of that time.
There's still the issue of that "Z depth hint" I mentioned: when the renderer projects 3D to 2D it loses information (one of those D's, actually). So you have to recover or guess that information when you convert back to 3D: the screenPoint you pass in to the above function is the x and y pixel coordinates, plus a depth value between 0 and 1 (zero is closer to the camera, 1 is farther away). How you make use of that depends on how the rest of your algorithm is designed. (At the very least, you can unproject both Z=0 and Z=1, and you'll get the endpoints of a line segment in 3D, with your original point somewhere along that line.)
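As a hedged usage sketch, assuming you save the camera matrices when you capture the frame for the CNN, the device stays in portrait, and cnnPoint is the CNN's 2D result in view coordinates (all assumptions on my part):
// Step 1: when capturing the frame for the CNN, also keep its camera matrices
guard let frame = sceneView.session.currentFrame else { return }
let savedViewMatrix = frame.camera.viewMatrix(for: .portrait)   // the "inverse of ARCamera.transform" mentioned above, adjusted for orientation
let savedProjection = frame.camera.projectionMatrix(for: .portrait,
                                                    viewportSize: sceneView.bounds.size,
                                                    zNear: 0.001, zFar: 1000)

// Steps 2/3: when the CNN result arrives ~0.3 s later, unproject against the *saved* matrices
let nearPoint = unproject(screenPoint: float3(Float(cnnPoint.x), Float(cnnPoint.y), 0),
                          modelView: savedViewMatrix,
                          projection: savedProjection,
                          viewport: sceneView.bounds)
let farPoint = unproject(screenPoint: float3(Float(cnnPoint.x), Float(cnnPoint.y), 1),
                         modelView: savedViewMatrix,
                         projection: savedProjection,
                         viewport: sceneView.bounds)
// The refined position lies on the ray from nearPoint to farPoint,
// e.g. at the current depth of the node you are correcting.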
Of course, whether this can actually be put together with your novel CNN-based approach is another question entirely. But at least you learned some useful 3D graphics math!

Relationship of video coordinates before/after resizing

I have a 720x576 video that was played full screen on a 1280x960 screen, along with the corresponding eye-tracker gaze coordinate data.
I have built gaze-tracking visualization code, but the one thing I am not sure about is how to convert my input coordinates to match the original video.
So, does anybody have an idea on what to do?
The native aspect ratio of the video (720/576 = 1.25) does not match the aspect ratio at which it was displayed (1280/960 = 1.33). i.e. the pixels didn't just get scaled in size, but in shape.
So assuming your gaze coordinates were calibrated to match the physical screen (1280 × 960), then you will need to independently scale the x coordinates by 720/1280 = 0.5625 and the y coordinates by 576/960 = 0.6.
Note that this will distort the actual gaze behaviour (horizontal saccades are being scaled by more than vertical ones). Your safest option would actually be to rescale the video to have the same aspect ratio as the screen, and project the gaze coordinates onto that. That way, they won't be distorted, and the slightly skewed movie will match what was actually shown to the subjects.
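For the straightforward per-axis scaling described above, a minimal sketch (screenToVideo is just an illustrative helper; the sizes are the ones from the question):
import CoreGraphics

// Map a gaze sample calibrated to the 1280x960 screen onto the native
// 720x576 video frame (independent per-axis scaling, as described above).
func screenToVideo(_ gaze: CGPoint) -> CGPoint {
    let scaleX: CGFloat = 720.0 / 1280.0   // 0.5625
    let scaleY: CGFloat = 576.0 / 960.0    // 0.6
    return CGPoint(x: gaze.x * scaleX, y: gaze.y * scaleY)
}

// Example: a gaze point at the screen center maps to the video center
let mapped = screenToVideo(CGPoint(x: 640, y: 480))   // (360.0, 288.0)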