I'm trying to implement an idea and I'm having a look at the ARView class in Apple's pARK sample ( http://developer.apple.com/library/ios/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083 ).
Instead of a single point (referenced by coordinates), I would like to draw a polygon on the ground. When the user points the device camera at the area where the polygon's coordinates have been set, the polygon should appear on the device screen.
As I'm totally new to the augmented reality concept and to Objective-C, can someone guide me and point me down the right path?
Thanks,
Zenon.
My code looks for a QR code in the frame received in the session(_:didUpdate:) ARSCNViewDelegate method. I check with hitTest whether all four corners and the center of the QR code lie in the same plane, and then drop an ARAnchor at the center. I create an SCNReferenceNode for the anchor with a reference to a SceneKit model of a fairly large house (70' w x 30' d x 30' h). I position the house 30 meters in front (z = -30) and 30 meters to the right (x = 30) of the detected QR code, and it initially appears OK.

However, if I try to "walk around" the model, it moves with me, always maintaining a constant distance and offset from my iPad's camera. I have tried using my own anchors, the plane anchors created by ARKit, and lots of other ideas; nothing changes. How can I get it to stay put, like the plane model does in the boilerplate ARKit Xcode project?
It sounds like, although you created some new anchors, you perhaps didn't attach your model to them. So when your model gets loaded and presented, it is effectively being tracked on the gyro, and you get that Pokémon GO effect where, regardless of what you do, the AR model doesn't change in size.
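For illustration, here is a minimal Swift sketch of that anchor-based approach, assuming an ARSCNView whose delegate is the view controller; the "house.scn" resource name, the offsets, and the placeAnchor helper are placeholders for your own setup:

```swift
import ARKit
import SceneKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    // Called once the QR code has been located (e.g. via hitTest on its center).
    func placeAnchor(at transform: simd_float4x4) {
        // Adding the anchor is not enough by itself; the content must also be
        // parented to the node ARKit creates for that anchor (see below).
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }

    // ARSCNViewDelegate: ARKit calls this when it creates a node for a new anchor.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // "house.scn" is a placeholder for the SceneKit model of the house.
        guard let houseURL = Bundle.main.url(forResource: "house", withExtension: "scn"),
              let houseNode = SCNReferenceNode(url: houseURL) else { return }
        houseNode.load()
        // The offset is now expressed relative to the anchor, not the camera,
        // so the model stays put in the world as the device moves.
        houseNode.position = SCNVector3(30, 0, -30)
        node.addChildNode(houseNode)
    }
}
```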
I am developing an augmented reality app for Project Tango using Unity3D.
Since I want virtual objects to interact with the real world, I use the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at relative position (0, 0, 0)).
I found out that I have to rotate the AR Camera up by about 17 degrees so the Dynamic Mesh matches the room; however, there is still a significant offset from the live camera preview.
I was wondering if anyone who has had to deal with this before could share their solution for aligning the Dynamic Mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera's offset from the device origin, which requires getting the color camera's pose relative to the device. You can't query this directly, but you can get the device pose in the IMU frame and the color camera pose in the IMU frame, and from those work out the color camera pose in the device frame. The links above show example code.
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match what I'd expect.
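The composition itself is just matrix algebra. Here is a rough Swift/simd sketch of that step; the identity matrices below are placeholders for whatever poses the Tango API actually returns, and the function name is made up:

```swift
import simd

/// Composes the color-camera-in-device-frame transform from the two poses
/// that are available: device-in-IMU-frame and color-camera-in-IMU-frame.
/// T_device_camera = inverse(T_imu_device) * T_imu_camera
func cameraInDeviceFrame(deviceInImu T_imu_device: simd_float4x4,
                         cameraInImu T_imu_camera: simd_float4x4) -> simd_float4x4 {
    return simd_inverse(T_imu_device) * T_imu_camera
}

// Placeholder poses; in practice these come from the Tango pose queries.
let T_device_camera = cameraInDeviceFrame(deviceInImu: matrix_identity_float4x4,
                                          cameraInImu: matrix_identity_float4x4)
// The translation of the camera in device coordinates sits in the last column;
// with real poses this is where the ~(0.061, 0.004, -0.001) offset shows up.
let offset = T_device_camera.columns.3
```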
I tried to implement the "Measure It" app in Unity 3D. I started with the PointCloud example scene downloaded from Tango's website.
My problem is that when I look in first-person view, the point cloud doesn't fill the screen, and when I look in third-person view I can see points outside the Unity camera's FOV.
I don't see this problem in the Explorer app, but that appears to be written in Java, so I think it's a Unity compatibility problem.
Does anyone have the same problem, or a solution?
Unity 3D 5.1.1
Google Tango urquhart
Sorry for my poor English,
Regards.
EDIT:
It looks like the ExperimentalAugmentedReality scene uses the point cloud to place markers in the real world, and that point cloud is right in front of the camera. I don't see any script differences between the two scenes, so I don't understand why it works there. If you have any ideas, please share.
I think it makes sense to divide your question into two parts.
Why the points don't fill the screen in the point cloud example.
To make the points fill the first-person view, the render camera's FOV needs to match the physical depth camera's FOV. In the point cloud example, I believe Tango is just using the default Unity camera's FOV, which is why you see the points not filling the screen (the render camera).
In the third-person camera view, the frustum is just a visual representation of the device's movement. It doesn't indicate the FOV or any other camera intrinsics of the device. For visualization purposes, Tango Explorer might have deliberately matched the frustum size to the actual camera FOV, but that's not guaranteed to be 100% accurate.
Why the AR example works.
In the AR example, we must set the virtual render camera's FOV to match the physical camera's FOV, otherwise the AR view will be off. On the Tango hardware, the color camera and depth camera are the same camera sensor, so they share the same FOV. That's why the AR example works.
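For reference, the vertical FOV the render camera should use follows directly from the physical camera's intrinsics, whatever engine you are in. A small sketch (the intrinsic values below are made-up placeholders; the real ones come from the Tango camera intrinsics):

```swift
import Foundation

/// Vertical field of view (in degrees) that the virtual render camera should use
/// so rendered content lines up with the physical camera image.
/// fy is the focal length in pixels, imageHeight the image height in pixels.
func verticalFOVDegrees(focalLengthY fy: Double, imageHeight: Double) -> Double {
    let fovRadians = 2.0 * atan(imageHeight / (2.0 * fy))
    return fovRadians * 180.0 / .pi
}

// Placeholder intrinsics, roughly the order of magnitude of a mobile color camera.
let fov = verticalFOVDegrees(focalLengthY: 1042.0, imageHeight: 720.0)
// In Unity, this value would then be assigned to the render camera's fieldOfView.
```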
I am trying to make a mobile application that has an AR (Augmented Reality) mode using Unity3D. I have connected my mobile device to my Unity3D project, and the camera works fine. But when I move the mobile device, the main camera inside Unity does not move along the same orbit that the device does. Does anyone know how to change or control the orbit of the main camera in Unity3D?
This could be happening for a number of reasons. It could be due to non-centered pivots or mismatched coordinate systems, for example.
Could you please specify which AR system you are using? As a side note, at work we recently had a project involving Unity3D and Metaio, and it was a nightmare to bend the system to do what we needed, especially when we had to do a lot of object positioning based on the local coordinate system.
When you refer to the orbit of the camera, I imagine it could be that the camera's pivot is somehow offset and the camera is rotating around that offset. Or maybe the camera is a child of the actual GameObject that is controlled by the AR system, in which case this parent node acts as a pivot for the camera.
In the picture below you can see that the camera is away from that center point, and when it rotates it does so around that center point; in other words, the camera always tries to look at that center point, which gives that feeling of "orbiting" when it moves.
Here's the link to the image (I can't post pictures yet on this forum -.- )
http://i.stack.imgur.com/fIcY2.png
I'm starting to study OpenGL, and I'm trying to make a 3D chess-like game, but I can't figure out how to know where I have clicked on the "table" so I can play the proper animations. Any advice?
This is called "3D picking". You have to translate screen coordinates into world coordinates. From there, do a ray/collision object (bounding box?) intersection test. If they intersect, that's where the user clicked.
You'll have to do a little more than this to solve the depth-order problem: find the time to first intersection for each object, then select the one with the lowest (positive) time.
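As a rough illustration of the intersection and depth-order steps, here is a Swift sketch using axis-aligned bounding boxes; building the ray from the clicked pixel (the unproject step) depends on your projection setup and is assumed to have happened already:

```swift
import simd

struct Ray {
    var origin: simd_float3
    var direction: simd_float3  // assumed normalized
}

struct AABB {
    var min: simd_float3
    var max: simd_float3
}

/// Slab test: returns the smallest positive distance t along the ray at which
/// it hits the box, or nil if the ray misses it entirely.
func intersect(_ ray: Ray, _ box: AABB) -> Float? {
    var tNear = -Float.greatestFiniteMagnitude
    var tFar = Float.greatestFiniteMagnitude
    for axis in 0..<3 {
        let o = ray.origin[axis], d = ray.direction[axis]
        if abs(d) < 1e-8 {
            // Ray is parallel to this slab: it must already lie inside it.
            if o < box.min[axis] || o > box.max[axis] { return nil }
        } else {
            var t0 = (box.min[axis] - o) / d
            var t1 = (box.max[axis] - o) / d
            if t0 > t1 { swap(&t0, &t1) }
            tNear = Swift.max(tNear, t0)
            tFar = Swift.min(tFar, t1)
            if tNear > tFar || tFar < 0 { return nil }
        }
    }
    return tNear >= 0 ? tNear : tFar  // tFar covers a ray starting inside the box
}

/// Picks the object whose intersection has the lowest positive t (closest to the camera).
func pick(ray: Ray, boxes: [AABB]) -> Int? {
    var best: (index: Int, t: Float)?
    for (i, box) in boxes.enumerated() {
        if let t = intersect(ray, box), best == nil || t < best!.t {
            best = (i, t)
        }
    }
    return best?.index
}
```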
If you google for "3D picking" you might find what you are looking for.
Here is a tutorial:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=32
Note that this is not specific to any shape of bounding object, be it a bounding box, a polygon, a curve, etc. You just have to figure out the math for the intersection test for each type of object you want to support.
Edit:
I didn't read that tutorial before I linked it, I just figured NEHE is where all the cool kids learn OpenGL (admittedly ten years ago...).
Here is something from the OpenGL FAQ about picking:
http://www.opengl.org/resources/faq/technical/selection.htm
waldecir, look for a raypick function. That's the name for sending a ray from the scene's camera center through the pixel you clicked on (actually, through that pixel's translated position on the camera plane representing the "glass surface of the screen" in the 3D world) and returning the frontmost polygon the ray hits, together with some information, usually coordinates within the polygon's surface axes, e.g. UV or texture coordinates. By checking those coordinates, you can determine which square the user clicked on.
Rays can be sent from any position and in any direction, so you'll likely have to get the camera position and its plane center, but the documentation should be able to help you there.
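Once the raypick gives you UV coordinates on the board polygon, mapping them to a square is just a scale and a floor; a small sketch, assuming u and v each run from 0 to 1 across the board:

```swift
/// Maps a hit's UV coordinates on the board polygon to a (file, rank) square,
/// assuming u and v each run from 0.0 at one edge of the board to 1.0 at the other.
func squareFromUV(u: Float, v: Float, squaresPerSide: Int = 8) -> (file: Int, rank: Int) {
    // Clamp just below 1.0 so a hit exactly on the far edge stays in range.
    let clamp: (Float) -> Float = { min(max($0, 0.0), 0.999_999) }
    let file = Int(clamp(u) * Float(squaresPerSide))
    let rank = Int(clamp(v) * Float(squaresPerSide))
    return (file, rank)
}

// A hit at (0.07, 0.93) lands on file 0, rank 7, i.e. the a8 corner square.
let square = squareFromUV(u: 0.07, v: 0.93)
```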