How to calculate image size in SpriteKit coordinates with ARKit 1.5? - sprite-kit

I read this answer about recognizing images here, but I still don't understand this part of it: "if you want to know the size/extent, you can get that from the physicalSize of the detected reference image".
I can create an SCNPlane from the physicalSize property in SceneKit, like Apple's sample, but that size is in meters (real-world units). SpriteKit coordinates are in points, so I can't use it directly in SpriteKit. When ARKit detects the image in a SpriteKit scene, calculateAccumulatedFrame and frame are both zero, so I don't know the image's size in SpriteKit coordinates. How can I calculate the image's size in SpriteKit coordinates?
Sorry, my English is not good.
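For illustration, a minimal sketch of one possible approach (not taken from the linked answer): use the anchor's transform plus the reference image's physicalSize to get the image's corners in world space, then project them into view points with ARCamera's projectPoint(_:orientation:viewportSize:). The helper name pixelSize(of:in:) and the fixed .portrait orientation are assumptions, not ARKit API.

```swift
import ARKit
import SpriteKit

// A sketch only: `pixelSize(of:in:)` is a made-up helper name, and a fixed
// .portrait orientation is assumed.
func pixelSize(of imageAnchor: ARImageAnchor, in sceneView: ARSKView) -> CGSize? {
    guard let frame = sceneView.session.currentFrame else { return nil }

    let physical = imageAnchor.referenceImage.physicalSize   // meters
    let transform = imageAnchor.transform                     // anchor -> world

    // Corners of the detected image in the anchor's local space.
    // ARKit places the image in the anchor's x/z plane.
    let halfW = Float(physical.width) / 2
    let halfH = Float(physical.height) / 2
    let corners: [simd_float4] = [
        simd_float4(-halfW, 0, -halfH, 1),   // top-left
        simd_float4( halfW, 0, -halfH, 1),   // top-right
        simd_float4(-halfW, 0,  halfH, 1)    // bottom-left
    ]

    // Project each corner from world space into view (point) coordinates.
    let viewport = sceneView.bounds.size
    let projected = corners.map { corner -> CGPoint in
        let world = transform * corner
        return frame.camera.projectPoint(simd_float3(world.x, world.y, world.z),
                                         orientation: .portrait,
                                         viewportSize: viewport)
    }

    // Apparent width/height in view points, valid for the current camera pose.
    let width  = hypot(projected[1].x - projected[0].x, projected[1].y - projected[0].y)
    let height = hypot(projected[2].x - projected[0].x, projected[2].y - projected[0].y)
    return CGSize(width: width, height: height)
}
```

Note that, unlike physicalSize, the projected size changes with the camera pose, so it is only meaningful for the frame in which it was computed.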

Related

ARKit large model follows camera instead of staying stationary

My code looks for a QR code in the frame received in the session(didUpdate) ARSCNViewDelegate method. I use hitTest to check that all four corners and the center of the QR code lie in the same plane, and then drop an ARAnchor at the center. I create an SCNReferenceNode for the anchor with a reference to a SceneKit model of a fairly large house (70' w x 30' d x 30' h). I position the house 30 meters in front of (z = -30) and 30 meters to the right of (x = 30) the detected QR code, and it initially appears OK. However, if I try to "walk around" the model, it moves with me, always maintaining a constant distance and offset from my iPad's camera. I have tried using my own anchors, the plane anchors created by ARKit, and lots of other ideas; nothing changes. How can I get it to stay put, like the plane model does in the boilerplate ARKit Xcode project?
It sounds like, although you created some new anchors, you perhaps didn't attach your model to them. When the model is loaded and presented without being parented to an anchor's node, it is effectively 'tracked' against the gyro, so you get that Pokemon Go effect where, regardless of what you do, the AR model doesn't change in size.
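To illustrate that point, a minimal sketch of attaching the model to the anchor's node in the ARSCNViewDelegate callback; the stored houseAnchorID, the "House.scn" resource name, and the 30 m offsets taken from the question are assumptions:

```swift
import ARKit
import SceneKit

// A sketch only: assumes you stored the identifier of the anchor you dropped at the
// QR code's center, and that "House.scn" exists in the app bundle.
var houseAnchorID: UUID?

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor.identifier == houseAnchorID,
          let url = Bundle.main.url(forResource: "House", withExtension: "scn"),
          let house = SCNReferenceNode(url: url) else { return }
    house.load()

    // The offset is relative to the anchor's node, so the model stays pinned to the
    // world-fixed anchor instead of following the camera.
    house.position = SCNVector3(30, 0, -30)
    node.addChildNode(house)
}
```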

SpriteKit - Scene Scale Mode with No Filter?

I am making a game in SpriteKit for macOS using Swift. Because of my pixel-art style, my SpriteKit scene has a size of 384 x 216, and I have relied on the scale mode to make everything fill the screen in fullscreen.
I realized that my pixel art becomes blurry this way, so I manually set all of my sprites to nearest-neighbor filtering, and that has worked so far.
The problem is that this gets very tedious, and I now have an SKLabelNode that uses a pixel font and shows up blurry in fullscreen. SKLabelNode doesn't have an option for nearest filtering as far as I can tell. Instead of switching every texture to nearest-neighbor, is it possible to make the scene scale with no filtering at all?
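As far as I know SpriteKit has no scene-wide "no filtering" setting, so one possible workaround for the label is to rasterize it into a texture and display that with nearest-neighbor filtering. A minimal sketch, where "PixelFont" and the font size are placeholders:

```swift
import SpriteKit

// A sketch only: "PixelFont" is a placeholder font name.
// Rasterizes the label into a texture and shows it without smoothing.
func pixelatedLabel(text: String,
                    fontNamed fontName: String = "PixelFont",
                    in view: SKView) -> SKSpriteNode? {
    let label = SKLabelNode(fontNamed: fontName)
    label.text = text
    label.fontSize = 8                       // the pixel font's native size

    // Snapshot the label, then disable filtering on the resulting texture.
    guard let texture = view.texture(from: label) else { return nil }
    texture.filteringMode = .nearest
    return SKSpriteNode(texture: texture)
}
```

The downside is that the texture has to be regenerated whenever the text changes, so this suits labels that update rarely.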

Camera-Offset | Project Tango

I am developing an augmented reality app for Project Tango using Unity3d.
Since I want virtual objects to interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0, 0, 0)).
I found out that I have to rotate the AR Camera up by about 17 degrees so the dynamic mesh matches the room, but there is still a significant offset from the camera's live preview.
I was wondering if anyone who has dealt with this before could share their solution for aligning the dynamic mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera's offset from the device origin, which requires you to get the color camera's pose relative to the device. You can't query this directly, but you can get the device pose in the IMU frame and the color camera pose in the IMU frame, and from those work out the color camera pose in the device frame. The links above show example code.
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match what I'd expect.
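For reference, the frame composition described above, sketched with simd for concreteness (a Tango app would do the equivalent in Unity/C#): the camera pose in the device frame is the inverse of the device pose in the IMU frame, multiplied by the camera pose in the IMU frame.

```swift
import simd

// A sketch only: T_device_camera = inverse(T_imu_device) * T_imu_camera,
// where each matrix is a pose expressed in the frame named first.
func cameraInDeviceFrame(deviceInIMU: simd_float4x4,
                         cameraInIMU: simd_float4x4) -> simd_float4x4 {
    return simd_mul(deviceInIMU.inverse, cameraInIMU)
}
```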

What would be the logic to make a ball move within concentric circles using the accelerometer in cocos2d?

I want to know the best approach to make a sprite ball move within concentric circles using the accelerometer in cocos2d. I've used the equation of a circle to check whether the sprite lies within the circle or not.
Find the next position of your sprite from the accelerometer input, check whether it lies within your allowed area, and if it does, update the sprite's position.
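A minimal sketch of that check, written in Swift for brevity (the cocos2d version would follow the same logic); the inner/outer radii and the speed factor are placeholder assumptions:

```swift
import CoreGraphics

// A sketch only: keeps the ball between an inner and an outer circle.
// `speed` is an arbitrary scale factor applied to the accelerometer reading.
func nextBallPosition(current: CGPoint,
                      accel: CGVector,          // accelerometer x/y reading
                      center: CGPoint,
                      innerRadius: CGFloat,
                      outerRadius: CGFloat,
                      speed: CGFloat = 10) -> CGPoint {
    let candidate = CGPoint(x: current.x + accel.dx * speed,
                            y: current.y + accel.dy * speed)

    // Circle equation: a point is inside radius r when (x-cx)^2 + (y-cy)^2 <= r^2.
    let distance = hypot(candidate.x - center.x, candidate.y - center.y)
    let withinArea = distance >= innerRadius && distance <= outerRadius

    // Only move the ball if the new position stays inside the allowed ring.
    return withinArea ? candidate : current
}
```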

Augmented reality, drawing a polygon using ArView and pARK sample iOS SDK

I'm trying to implement an idea and I'm having a look at the ARView class in Apple's pARk sample ( http://developer.apple.com/library/ios/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083 ).
Instead of a single point (referenced by a coordinate), I would like to draw a polygon on the ground. When the user points the device camera at the area where the polygon's coordinates have been set, the polygon should appear on the device screen.
As I'm totally new to the augmented reality concept and to Objective-C, can someone guide me and point me in the right direction?
Thanks,
Zenon.