ARKit: How to hide the X, Y, Z world origin anchor?

I want to remove that world origin anchor (?). I published the app, but it still appears on screen.

The world origin you are referring to is part of ARSCNDebugOptions, which are:
options for drawing overlay content to aid debugging of AR tracking in a SceneKit view.
Specifically, what you are referring to is the .showWorldOrigin option, which:
displays a coordinate axis visualization indicating the position and orientation of the AR world coordinate system.
As such, one way to disable this is to clear the debug options:
augmentedRealityView.debugOptions = []
where augmentedRealityView is an ARSCNView.
Hope it helps...

Related

How to fit content into a specified area of screen at game startup in Unity3D

We are using Unity3D to develop a medical application. To put it briefly: we have a very large touch screen hanging on a wall. The screen is fixed and cannot be moved. Patients may be adults or children, tall or short, and so on. Before starting the game, we perform a calibration phase that tries to determine, more or less, the patient's touch range: a taller person can reach the highest points on the screen, while a shorter person cannot. The calibration phase thus identifies roughly which area is reachable; its (simplified) result is a rectangle. We would like to fit the content of the game made with Unity3D inside this rectangle. That is: is there some function in Unity3D that lets you specify, when the game starts, where the game's elements are drawn, defining a sort of "sub-screen"?
Absolutely yes. It is quite easy, just change the Viewport Rect of the Camera:
Also check the Documentation for completeness (the paragraph Normalized Viewport Rectangles gives a game example in which the camera is split for a two-player match... you basically want the same thing, but with a single camera).
In this doc, there's also an example in which the viewport is changed programmatically (that's your case). Note that Rect takes (x, y, width, height), all normalized to the 0-1 range:
Camera.main.rect = new Rect(x, y, width, height);
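The conversion from a calibrated pixel rectangle to those normalized viewport values is simple arithmetic: divide each pixel quantity by the corresponding screen dimension. A sketch (in Python for clarity; the function name and the example numbers are illustrative, not from the question):

```python
def normalized_viewport(cal_x, cal_y, cal_w, cal_h, screen_w, screen_h):
    """Convert a calibrated pixel rectangle (origin at the bottom-left,
    as Unity's viewport expects) into normalized (x, y, w, h) values."""
    return (cal_x / screen_w, cal_y / screen_h,
            cal_w / screen_w, cal_h / screen_h)

# A 1000x800 reachable area starting 460px from the left edge and
# 100px from the bottom of a 1920x1080 screen:
x, y, w, h = normalized_viewport(460, 100, 1000, 800, 1920, 1080)
# These are the values you would pass to new Rect(x, y, w, h).
```
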

How to make a playing area with fixed square dimensions?

I want to have a playing area with a square dimension, and a sidebar GUI that can be resized according to the resolution. I drew a picture that might help explain
I tried following a tutorial here, but the actual dimensions in the build I run seem to differ from what I configured in the editor. Also, how do I get the coordinates of the middle of the square (to instantiate something there)? Any help? Thanks!
In the tutorial you linked it says:
When running your game from within Unity's editor, be sure to have the Game window open and visible in the editor when you run the game. There's currently a bug in Unity 3.0 (and possibly in earlier versions as well) where the window resolution reported to the script does not match the actual resolution of the window inside the editor if the window isn't visible at the time the play button is pressed, leading to a viewport with the wrong size.
Did you take note of this?
Also, what do you mean by the coordinates of the center of the square? If you mean the center in actual screen coordinates, you should compute a point from the square's on-screen dimensions, like:
Vector2 point;
Rect rect = camera.pixelRect; // the camera's area on screen, in pixels
point.x = rect.x + rect.width / 2;
point.y = rect.y + rect.height / 2;
But if you want the point in 3D space where the camera is pointing, you can use the default method ViewportToWorldPoint, as shown here: http://answers.unity3d.com/questions/189731/how-would-i-find-a-point-in-front-of-my-cam.html
It should look like this:
float distanceFromViewPort = 1f; // the distance from the camera at which the instance should appear
Vector3 point = camera.ViewportToWorldPoint(new Vector3(0.5f, 0.5f, distanceFromViewPort));
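One simple way to get the square play area itself is to make the viewport's pixel width equal to the screen height, leaving the remaining vertical strip for the sidebar GUI. The arithmetic can be sketched like this (Python for clarity; the assumption that the square sits at the left edge is mine, not from the question):

```python
def square_viewport(screen_w, screen_h):
    """Normalized viewport rect (x, y, w, h) for a square play area
    anchored at the left edge; the rest of the screen is the sidebar."""
    side = screen_h                       # square side in pixels
    return (0.0, 0.0, side / screen_w, 1.0)

def square_center_px(screen_w, screen_h):
    """Pixel coordinates of the square's center (e.g. to instantiate there)."""
    side = screen_h
    return (side / 2, side / 2)

vx, vy, vw, vh = square_viewport(1920, 1080)   # vw = 1080/1920 = 0.5625
cx, cy = square_center_px(1920, 1080)          # (540.0, 540.0)
```
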

Lower Left Coordinates of SKScene

I'm new to Swift SpriteKit programming and the coordinate system is driving me crazy. I create a sprite and I want to move it to the four corners of the screen. So, I set the position to (0,0). That's off the bottom left corner of the screen. Through some manual testing I've developed the chart below. The lower left and upper right are what the iOS simulator report when I touch the screen.
I have 2 questions:
1: Is there a method of determining the coordinates of the lower left-hand corner of the view? Maybe I could build a dictionary of coordinate values, determine the machine type, and then set the offsets. But that's a lot of work and might not be accurate for new devices. It just seems that there should be a scene or frame property that I can use to put an object at the bottom left of the window.
2: The math doesn't work. On the iPhone5, 300 (lower-left x) + 320 (width) = 620, not the reported 727. The same is true of the y coordinates. How does this work?
I set as few parameters as possible. I have not changed the anchorPoint or position of the scene.
Device Size LL UR
iPhone4s (320,480) (260,0) (766,764)
iPhone5 (320,568) (300,0) (727,764)
iPhone5s (320,568) (298,0) (727,764)
iPhone6 (375,667) (297,1) (728,765)
iPhone6plus (414,736) (298,0) (728,766)
iPad2 (768,1024) (226,0) (800,768)
iPad Air (768,1024) (224,0) (800,767)
iPad Retina (768,1024) (225,0) (800,768)
OK, I think I figured this out. Setting scene!.scaleMode = SKSceneScaleMode.ResizeFill allows me to identify the four corners of the screen, so now I can determine when a sprite crosses the edge of the screen. This doesn't seem to distort my images. I haven't been able to test it on a real device yet, but it leaves a blank area around the iPad2.
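The "math doesn't work" part has a likely explanation: assuming the scene is the SpriteKit template's default 1024x768 with the default aspect-fill scale mode, the scene is scaled uniformly until it covers the view and the overflow is cropped, centered. Touches are then reported in scene coordinates of the visible region, not in view points. A sketch of that arithmetic (Python for clarity) reproduces the numbers in the table above:

```python
def visible_scene_rect(scene_w, scene_h, view_w, view_h):
    """Scene-space rectangle actually visible under aspect-fill scaling:
    the scene is scaled uniformly until it covers the view, then centered,
    so the overflow on one axis is cropped equally on both sides."""
    scale = max(view_w / scene_w, view_h / scene_h)
    vis_w = view_w / scale
    vis_h = view_h / scale
    x0 = (scene_w - vis_w) / 2
    y0 = (scene_h - vis_h) / 2
    return (x0, y0, x0 + vis_w, y0 + vis_h)

# iPhone5 (320x568 view, 1024x768 scene):
print(visible_scene_rect(1024, 768, 320, 568))
# roughly (296, 0, 728, 768), matching the reported (300,0)..(727,764)
```

The same calculation gives a visible x range of about 224..800 for the iPad2's 768x1024 view, which also matches the table.
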
Applause for the hard work! haha
If I were going about getting values for the lower coordinates, I would use CGRectGetMinX to get the x-coordinate and CGRectGetMinY to get the y-coordinate, like so:
CGPoint minimum = CGPointMake(CGRectGetMinX(self.frame),CGRectGetMinY(self.frame));
Then, if you want the top coordinates, just use the same calls but with MaxX or MaxY. Yeah, the coordinates are a bit confusing, but if you use those it will be a breeze.
EDIT: If you need to detect when a body has exited the visible area, what has worked for me so far is giving the scene an edge-loop physics body and detecting contact with it:
[SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame]
Another option you could try is to look at the bounds of the UIScreen object.

How to get viewing height or pitch angle from a bing map?

I am using the v7.0 Ajax control from Bing Maps and I'm trying to get the following information while being in the bird's eye view mode on the map:
the viewing height (or altitude) -- this is the zoom level, right?
the pitch angle -- does this always have the same value, regardless of the viewing angle, while in bird's eye view mode?
Thanks.
There is no such thing as the "altitude" at which a projected map image is created. There is a map scale and resolution (i.e. how many metres each screen pixel corresponds to) which varies according to the zoom level and the location on the earth's surface, but this does not correlate to the view you would get if you were looking down at the earth from x metres above it.
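To make the scale/resolution point concrete: for the standard 256-pixel-tile Web Mercator scheme that Bing Maps uses, the ground resolution at a given latitude and zoom level follows the formula from the Bing Maps Tile System documentation. A sketch in Python:

```python
import math

EARTH_RADIUS_M = 6378137  # equatorial radius used by Web Mercator

def ground_resolution(latitude_deg, zoom):
    """Metres of ground covered by one screen pixel at the given
    latitude and zoom level (256px tiles, Web Mercator)."""
    map_width_px = 256 * 2 ** zoom
    circumference = 2 * math.pi * EARTH_RADIUS_M
    return math.cos(math.radians(latitude_deg)) * circumference / map_width_px

print(ground_resolution(0, 1))   # ~78271.5 m per pixel at the equator, zoom 1
print(ground_resolution(47, 19)) # centimetre-scale resolution at high zoom
```

Note how the value depends on latitude as well as zoom, which is why it does not translate into any single "altitude".
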
The angle at which Bird's eye imagery is shot varies in different scenes - you can observe this as you pan around the map - the imagery will clearly warp as you move from one scene to the next.

Is my understanding of the functions of the compass & GPS in AR apps correct?

In an AR app where you annotate objects or buildings in a camera view, I want to understand the role that the different hardware components on the phone (iPhone/Android) play in achieving the AR effect. Please elaborate on the following:
Camera: provides the 2D view of reality.
GPS: provides the longitude,latitude of the device.
Compass: direction with respect to magnetic north.
Accelerometer: (does it have a role?)
Altimeter: (does it have a role?)
An example: if the camera view is showing the New York skyline, how does the information from the hardware listed above help me annotate the view? Assuming I have the longitude and latitude of the Chrysler Building and it is visible in my camera view, how does one accurately calculate where to draw its name on the 2D picture? I know that, given two (longitude, latitude) pairs, you can calculate the distance between the points.
use the camera to get the field of view.
use the compass to determine the direction the device is pointing. That direction determines the set of objects that fall into the field of view and need to be shown with AR adorners.
use the GPS to determine the distance between your location and each object. The distance is usually reflected in the size of the AR adorner you show for that object, or in the amount of detail you show.
use the accelerometer to determine the horizon of the view (a 3-axis accelerometer sensitive enough to measure the force of gravity). The horizon can be combined with the object's altitude to position the AR adorners properly vertically.
use the altimeter for added precision in vertical positioning.
if you have detailed terrain/building information, you can also use it together with the altimeter to determine the line of sight to the various objects, and clip (fully or partially) the AR adorners of obscured or invisible objects.
if the AR device is moving, use the accelerometers to estimate the speed, and either throttle the number of objects downloaded per view or smartly pre-fetch the objects about to come into view, to keep up with the speed of view changes.
I will leave the details of calculating all this data from the devices as an exercise to you. :-)
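The core of the horizontal placement described above is: compute the bearing from your GPS position to the object, subtract the compass heading, and map the resulting angle into the camera's horizontal field of view. A rough sketch (Python for clarity; the 60-degree FOV, the device position, and the Chrysler Building coordinates are illustrative assumptions):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2,
    in degrees clockwise from true north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x_fraction(bearing, heading, fov_deg=60):
    """Horizontal screen position (0 = left edge, 1 = right edge) of an
    object, or None if it lies outside the camera's field of view."""
    off = (bearing - heading + 180) % 360 - 180   # signed angle, -180..180
    if abs(off) > fov_deg / 2:
        return None
    return 0.5 + off / fov_deg

# Device near Grand Central at (40.7527, -73.9772), facing heading 120,
# annotating the Chrysler Building at roughly (40.7516, -73.9755):
b = bearing_deg(40.7527, -73.9772, 40.7516, -73.9755)  # ~130 degrees (SE)
print(screen_x_fraction(b, 120))                       # roughly 0.68
```

Vertical placement works the same way, using the pitch from the accelerometer and the vertical field of view instead.
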