My code looks for a QR code in the frame received in the ARSCNViewDelegate's session(_:didUpdate:) method. I use hitTest to check that all four corners and the center of the QR code lie in the same plane, then drop an ARAnchor at the center. I create an SCNReferenceNode for the anchor that references a SceneKit model of a fairly large house (70'w x 30'd x 30'h). I position the house 30 meters in front (z = -30) and 30 meters to the right (x = 30) of the detected QR code, and it initially appears fine. However, if I try to "walk around" the model, it moves with me, always maintaining a constant distance and offset from my iPad's camera. I have tried using my own anchors, the plane anchors created by ARKit, and many other ideas; nothing changes. How can I get it to stay put, like the plane model does in the boilerplate ARKit Xcode project?
It sounds like, although you created some new anchors, you perhaps didn't assign your model to them. So when your model gets loaded and presented, it's being "tracked" on the gyro alone, and you get that Pokémon Go effect where, regardless of what you do, the AR model doesn't change in size or position.
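For reference, here is a minimal Swift sketch of that fix, assuming the hit-test and ARSCNView setup from the question; the "house.scn" asset name, outlet, and offsets are placeholders. The point is that the house is attached to the anchor's node, so ARKit, not the camera, owns its world position.

```swift
import UIKit
import ARKit
import SceneKit

class QRPlacementViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!   // connected in the storyboard (assumption)

    // Call this once the QR code's centre has been hit-tested.
    func placeAnchor(at hitResult: ARHitTestResult) {
        let anchor = ARAnchor(transform: hitResult.worldTransform)
        sceneView.session.add(anchor: anchor)
    }

    // ARSCNViewDelegate: attach the model to the anchor's node.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard !(anchor is ARPlaneAnchor) else { return }   // skip ARKit's own plane anchors
        guard let houseURL = Bundle.main.url(forResource: "house", withExtension: "scn"),
              let house = SCNReferenceNode(url: houseURL) else { return }
        house.load()
        // The offset is now expressed relative to the anchor, not the camera.
        house.position = SCNVector3(30, 0, -30)   // 30 m to the right of, 30 m in front of the QR code
        node.addChildNode(house)
    }
}
```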
I'm trying to use ARKit to create a simple demo where I scan an image and display a 3D model of a house at the exact location defined by the image. The picture would be my starting point (zero), and from there I would show the house, which I could then walk up to in augmented reality. Is there any way to do this?
I got this working with image tracking, but I can't fix the position relative to the scanned image; the model always disappears as soon as the phone no longer sees the image.
My question is:
Is it possible to fix the object's position after scanning the image, so that the phone no longer needs to point at it and I can walk 20 meters further away?
Thanks for any help; I'm new to ARKit.
I have a similar issue.
I recommend cloning the object to the same location with Instantiate after image tracking.
Alternatively, I think you could use AnchorContent(position, prefab) from the Anchor Manager.
Anchor Content Method
That way, even if you no longer see the image, the model won't disappear.
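The answer above is in Unity terms; for what it's worth, here is a hedged sketch of the same idea in native ARKit/Swift (all names are placeholders, and it assumes ARWorldTrackingConfiguration with detectionImages so world tracking keeps running once the image leaves the view): when the reference image is first detected, its transform is copied into a plain world-space anchor, and the model hangs off that anchor rather than off the image.

```swift
import UIKit
import ARKit
import SceneKit

class ImageTrackingViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!   // assumed to run ARWorldTrackingConfiguration with detectionImages
    private var hasPlacedHouse = false

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        if let imageAnchor = anchor as? ARImageAnchor, !hasPlacedHouse {
            hasPlacedHouse = true
            // A plain anchor at the image's position, independent of continued image tracking.
            let worldAnchor = ARAnchor(name: "housePlacement", transform: imageAnchor.transform)
            sceneView.session.add(anchor: worldAnchor)
        } else if anchor.name == "housePlacement" {
            // The house hangs off the world anchor, so it stays put as you walk away.
            if let house = SCNScene(named: "house.scn")?.rootNode.clone() {   // placeholder asset
                node.addChildNode(house)
            }
        }
    }
}
```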
My model doesn't disappear after image tracking, but when I walk 20 meters further, the problem is that the object's position changes slightly.
Once you've solved your current problem, I'd appreciate it if you could let me know whether the object's position stays fixed for you.
I am implementing a "companion map" for a HoloLens application using Unity and Visual Studio. My vision is for a small rectangular map to be affixed to the bottom right of the HoloLens view, and to follow the HoloLens user as they move about, much like the display of a video game.
At the moment my "map" is a .jpeg made into a material and put on an upright plane. Is there a way for me to affix the plane such that it is always in the bottom right of the user's view, as opposed to being fixed in the 3D space that the user moves through?
The Orbital Solver in MRTK can implement this idea without your writing any code. It can lock the map to a specified position and offset from the player.
To use it, what you need to do is:
Add Orbital Script Component to your companion map.
Modify the Local Offset and World Offset properties to keep the map in the bottom right of the user's view.
Set the Orientation Type to Face Tracked Object.
In addition, the SolverExamples scene provided by the MRTK v2 SDK is an excellent starting point for becoming familiar with the Solver components.
I am developing an augmented reality app for Project Tango using Unity3d.
Since I want virtual objects to interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17 degrees for the dynamic mesh to match the room; however, there is still a significant offset from the live camera preview.
I was wondering if anyone who has had to deal with this before could share their solution for aligning the dynamic mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
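In pose-composition terms (this notation is mine, with T_{A,B} denoting the pose of frame B expressed in frame A), the relation is roughly:

T_{device,camera} = T_{imu,device}^{-1} \, T_{imu,camera}

That is, invert the device-in-IMU pose and compose it with the camera-in-IMU pose to get the color camera's pose in the device frame.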
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match what I'd expect.
First, I want to introduce my problem, because it is fairly complex and you need the context to understand it properly.
I am trying to do something with SceneKit and Swift: I want to reproduce what we see in the TV show Doctor Who, where the Doctor's spaceship is bigger on the inside, as you can see in this video.
Of course the SceneKit framework doesn't support that kind of impossible geometry, so we need some sort of hackery to achieve it.
Now let's talk about my idea in plain English.
What we want to do is display two completely different dimensions in the same place, so I was thinking of:
A first dimension for the inside of the spaceship.
A second dimension for the outside of the spaceship.
Now, say you are outside the ship: you would be in the outside dimension, and in this outside dimension my goal would be to display a portion of the inside dimension at the level of the door, to give the effect where the camera is outside but we can clearly see that the inside is bigger:
We would use an equivalent principle from the inside.
Now let's talk about the game logic:
I think a good way to represent these dimensions would be to use two scenes.
We will call the scene for the outside outsideScene, and the scene for the inside insideScene.
So if we go back to the picture, this is what it would look like at the scene level:
To make it look realistic, the view of the inside needs to follow the movements of the outside camera; that's why I think all the properties of these two cameras should be identical:
On the left is the outsideScene and on the right, the insideScene. I represent the camera field of view in orange.
If the outsideScene camera moves right, the insideScene camera does exactly the same thing; if the outsideScene camera rotates, the insideScene camera rotates in the same way... you get the principle.
So, my question is the following: what can I use to mask a certain portion of one scene (in this case the yellow zone in the outsideView) with what the camera of another view (the insideView) "sees"?
At first, I thought I could simply get an NSImage from the insideScene and use it as the texture of a surface in the outsideScene, but the problem is that SceneKit would compute its own perspective, lighting, etc., so it would just look like we were displaying something on a screen, and that's not what I want.
There is no super easy way to achieve this in SceneKit.
If your "inside scene" is static and can be baked into a cube map texture, you can use shader modifiers and a technique called interior mapping (you can easily find examples on the web).
If you need a live, interactive "inside scene", you can use the same technique but will have to render your scene to a texture first (or render your inside scene and outer scene one after the other with stencils). This can be done by leveraging SCNTechnique (new in Yosemite and iOS 8). On older versions you will have to write some OpenGL code in SCNSceneRenderer delegate methods.
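Whichever rendering route you take, the camera mirroring the question describes (the outside camera drives the inside camera) can be sketched in Swift roughly like this; the separate render-to-texture / SCNTechnique pass is left out, and insideCameraNode and the two-renderer setup are assumptions.

```swift
import SceneKit

// Set an instance of this as the delegate of the renderer that draws the outsideScene.
class DimensionSync: NSObject, SCNSceneRendererDelegate {
    let insideCameraNode: SCNNode   // the camera node of the insideScene

    init(insideCameraNode: SCNNode) {
        self.insideCameraNode = insideCameraNode
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let outsideCamera = renderer.pointOfView else { return }
        // Mirror position and orientation every frame; the two SCNCameras are
        // assumed to be configured with identical field of view, zNear, zFar, etc.
        insideCameraNode.transform = outsideCamera.transform
    }
}
```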
I don't know if it's "difficult". As is often the case on iOS, a lot of the time the simplest answer... is the simplest answer.
Maybe consider this:
Map a texture onto a cylinder sector prescribed by the geometry of the TARDIS's box shape. Make sure the cylinder radius is equal to the focal distance of the camera, and make sure you track the camera to the focal point.
The texture will be distorted because it is a cylinder mapping onto a box. The actors' nodes inside the TARDIS will react properly to the camera, but there should be two groups of light sources: one set for inside the TARDIS and one for outside it.
How does one create the game "area" for a scroller game?
How does one then place various obstacles, with collision detection, along this scrolled environment?
I want to try out a project which will allow the user to scroll to a certain direction in order to progress through the game.
How does one map the objects within the environment and then move what I guess is the "camera", the view of the environment?
Thanks
The trick is that there is no "area". The only bits that exist are what's under the camera (the view you currently see) and a small surrounding area, giving you time to prepare more world in the direction you are moving.
Your world coordinates need to be defined, as do the starting coordinates for the view. You use tiles to create the view; at its simplest that is nine tiles: the one you are currently "on" and one in each direction. If you look at the keyboard number pad, you are "on" the 5. If you move a little to the top right, you are displaying parts of tiles 8, 9, 5 and 6. At that point you would create new tiles in case you move further. As you leave tile 5, you would probably release tiles 4, 1 and 2. Nine tiles may not be the optimal number, of course.
If you're doing this with UIViews (probably not the high-performance choice), you will probably define one big view that can hold all the tiles, add and remove the tile subviews on it, and set the large view's frame to define your camera position. As you move, you change the frame to move your camera; when you need to shuffle tiles, you move both the tiles and the frame to recenter, giving room to move further within the coordinates of your view.
Collision detection is pretty simple since you define your own dimensions (the thing representing "you" in this world) and objects in your view have dimensions you can check against. CGRectIntersectsRect might be the simplest function to use but if you have irregularly-sized views it will get more complicated.
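A small sketch of that rectangle-overlap check, using CGRect.intersects (the Swift spelling of CGRectIntersectsRect); playerView and obstacleViews are placeholder names and are assumed to be subviews of the same big "world" view, so their frames share a coordinate space.

```swift
import UIKit

// Returns the obstacles whose frames currently overlap the player's frame.
func collidedObstacles(player playerView: UIView, obstacles obstacleViews: [UIView]) -> [UIView] {
    return obstacleViews.filter { $0.frame.intersects(playerView.frame) }
}
```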
This answer about implementing a cyclic UIScrollView is a similar idea, but it only handles scrolling in one direction.
This is a pretty common topic and if you google you will find a lot of sample code and tutorials around.
From the game logic side:
All your objects (let's call them gameobjects) should have a coordinate (an x and y position) in your game world. You will keep all your gameobjects in a list. Your player object will be a gameobject too. Usually your "camera" position will be relative to your player object's position, i.e. the player will always be in the center of the screen. To determine the current "screen" position of your objects, you just subtract the camera position from each object's "world" position. Collision detection is usually done with simple rectangular overlap checks: you give all your objects a width and a height attribute and do your collision checks using x, y, width and height.
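A rough Swift sketch of that game-logic side; all type and property names here are illustrative, not from any particular engine.

```swift
import CoreGraphics

struct GameObject {
    var x: CGFloat, y: CGFloat            // world position
    var width: CGFloat, height: CGFloat
    var worldRect: CGRect { CGRect(x: x, y: y, width: width, height: height) }
}

struct Camera {
    var x: CGFloat, y: CGFloat            // world coordinate of the screen's top-left corner
}

// Screen position = world position minus camera position.
func screenPosition(of object: GameObject, camera: Camera) -> CGPoint {
    CGPoint(x: object.x - camera.x, y: object.y - camera.y)
}

// Keep the player centred: the camera follows the player's world position.
func cameraCentered(on player: GameObject, screenSize: CGSize) -> Camera {
    Camera(x: player.x + player.width / 2 - screenSize.width / 2,
           y: player.y + player.height / 2 - screenSize.height / 2)
}

// Simple rectangular overlap check using x, y, width and height.
func collides(_ a: GameObject, _ b: GameObject) -> Bool {
    a.worldRect.intersects(b.worldRect)
}
```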
From the display side:
If you want to display many objects (player, enemies, obstacles and so on), the best way to implement something like this is to use an OpenGL view. In this view you can display all objects as textures mapped onto polygons. You can use a library such as cocos2d, which already has all of the code to achieve this easily.