Place object at GPS coordinates in AR Foundation (Unity) - unity3d

I want to instantiate an object at specific GPS coordinates in AR Foundation. I have tried several approaches, but none convinces me or gives the result I want.
One uses a trigger radius: when you enter the radius, an object is activated at your position and stays fixed there. This doesn't convince me, because you first have to reach the exact point before you can see the object, and the radius itself costs you accuracy unless you shrink it to something like 2 cm.
The other I got from this blog: https://blog.anarks2.com/Geolocated-AR-In-Unity-ARFoundation/ , but depending on which cardinal direction you are facing, the object is instantiated in one place or another (I seem to remember the post says it is unfinished).
Does anyone have a proven approach that works with reasonable accuracy?
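For context on why the blog approach drifts with heading: it converts the GPS delta between the device and the target into a local east/north offset in meters, then relies on the compass to align Unity's axes with true north, so any compass error rotates every placement. A minimal sketch of the offset math (Swift for illustration, all names hypothetical; in Unity this would be ported to C#):

    import Foundation

    // Equirectangular approximation: east/north offset in meters from an
    // origin GPS fix to a target GPS coordinate. Good enough over short
    // distances. Hypothetical helper, for illustration only.
    func enuOffset(originLat: Double, originLon: Double,
                   targetLat: Double, targetLon: Double) -> (east: Double, north: Double) {
        let metersPerDegreeLat = 111_320.0   // roughly constant everywhere
        let north = (targetLat - originLat) * metersPerDegreeLat
        // A degree of longitude shrinks with latitude, hence the cosine.
        let east = (targetLon - originLon) * metersPerDegreeLat * cos(originLat * .pi / 180)
        return (east, north)
    }

The object would then be placed at (east, altitude offset, north) in a scene whose forward axis has been rotated to face true north using the compass heading; that rotation step is exactly where the heading-dependent error creeps in.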

Apple has released something called ARGeoAnchor.
I don't think it is supported in Unity yet.
If you wish, you can use it with RealityKit as your engine.
https://developer.apple.com/documentation/arkit/argeoanchor
Also note that it is only supported in certain locations in the US.
From what I can tell so far, it is very accurate, the only problem being the altitude at which you want to place things.
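For completeness, a hedged sketch of what using it looks like in Swift with plain ARKit (assuming iOS 14+, a supported device, and one of the supported locations):

    import ARKit
    import CoreLocation

    // Sketch: pin content to a real-world coordinate with ARKit geotracking.
    // Only works where Apple has coverage; always check availability first.
    func startGeoAnchoredSession(_ session: ARSession) {
        ARGeoTrackingConfiguration.checkAvailability { available, _ in
            guard available else { return }  // no geotracking here or on this device
            session.run(ARGeoTrackingConfiguration())
            let coordinate = CLLocationCoordinate2D(latitude: 37.7955, longitude: -122.3937)
            // Omitting the altitude lets ARKit resolve it, which sidesteps
            // the altitude problem mentioned above to some extent.
            session.add(anchor: ARGeoAnchor(coordinate: coordinate))
        }
    }

The coordinate above is an arbitrary example, not anything from the question.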

Related

Building system like Clash of Clans / Boom Beach

Does anyone know how to build a building system like CoC / Boom Beach? I know how to do a Fortnite-style building system, but that only involves 1x1 structures, while I need 3x2, 5x3, and many more sizes. I'm going to build it in UE4 with Blueprints. I've been looking for a long time and couldn't find an answer. Hope you'll help me!
Thanks.
As the question is rather vague and doesn't give anyone much to go on, I'll take a stab at it, conceptually at least.
I assume you have a base building Blueprint or struct; in it I would create a Vector2D variable (or something similar) to give each building a length and width.
Then, when you are trying to spawn a building, check the tiles within that length and width of the center of the screen/cursor for any existing buildings. When you do spawn the building, make sure it claims the tiles it occupies so that other buildings cannot overlap them later on.
So the main "meat and potatoes" of this project will be creating a grid system, plus a system that can check whether a tile is already occupied and that claims and releases tiles as needed; see the sketch below.
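A minimal sketch of that tile bookkeeping (written in Swift here purely to show the logic; in UE4 you would express the same thing in Blueprints or C++, and every name is made up):

    // Grid occupancy: buildings have a footprint (width x height in tiles);
    // placement checks every covered tile, then claims them.
    struct BuildGrid {
        let columns: Int
        let rows: Int
        private var occupied = Set<Int>()   // flattened indices of claimed tiles

        init(columns: Int, rows: Int) {
            self.columns = columns
            self.rows = rows
        }

        private func index(_ x: Int, _ y: Int) -> Int { y * columns + x }

        // True if a footprint anchored at (x, y) fits entirely on free tiles.
        func canPlace(x: Int, y: Int, width: Int, height: Int) -> Bool {
            guard x >= 0, y >= 0, x + width <= columns, y + height <= rows else {
                return false                // footprint runs off the grid
            }
            for ty in y..<(y + height) {
                for tx in x..<(x + width) where occupied.contains(index(tx, ty)) {
                    return false            // another building already claims this tile
                }
            }
            return true
        }

        // Claim the footprint's tiles; call only after canPlace succeeds.
        mutating func place(x: Int, y: Int, width: Int, height: Int) {
            for ty in y..<(y + height) {
                for tx in x..<(x + width) { occupied.insert(index(tx, ty)) }
            }
        }

        // Release tiles when a building is sold or moved.
        mutating func remove(x: Int, y: Int, width: Int, height: Int) {
            for ty in y..<(y + height) {
                for tx in x..<(x + width) { occupied.remove(index(tx, ty)) }
            }
        }
    }

A 3x2 building is then just canPlace/place with width 3 and height 2; nothing about the system is specific to 1x1.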
If you want someone to give you a more concrete answer, you will need to show what you have done and tried in your question, especially for one as broad as this.

Measuring distance with iPhone camera

How can I measure distances in real time (through the video camera?) on the iPhone, like this app that compares the known size of a card against its apparent size to estimate distance?
Are there any other ways to measure distances? Or how would I go about the card method? What framework should I use?
Well, you do have something for reference, hence the use of the card. That said, after watching the video for the app, I can't say it seems too user friendly.
So you either need a reference object of known size, or you need to deduce the scale from the image itself. One idea I just had that might help is using the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some things).
Here's what I think.
When the user wants to measure something, he takes a picture of it, but you actually capture two separate images, one with the flash on and one with it off. You can then analyze the lighting differences and the flash reflection between the two to estimate the scale of the image. This will only work for close and not-too-shiny objects, I guess.
But that's about the only other way I can think of to deduce scale from an image without any fixed-size objects.
I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work!
An alternative approach would be to use the camera's auto-focus. Point-and-shoot cameras often have a range finder that they use to auto-focus. The iPhone doesn't have this, and its f-stop is fixed. However, users can change the focus by tapping the camera screen, and the phone can also switch between regular and macro focus.
If the API exposes the current focus settings, maybe there's a way to use them to determine range?
Another solution may be to use two laser pointers.
Basically you would shine two laser pointers at, say, a wall, keeping the beams parallel. The further away the wall is, the closer together the two dots appear in the video, even though they remain the same real-world distance apart. From that you can derive a simple formula for distance based on how far apart the dots are in the image; see the sketch below.
See this thread for more details: Possible to measure distance with an iPhone and laser pointer?
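Under a pinhole-camera assumption, that formula is a one-liner. A sketch (the focal length in pixels would have to be calibrated once, e.g. by photographing the dots at a known distance; all names hypothetical):

    import Foundation

    // Pinhole model: parallel beams a fixed real distance apart project to a
    // pixel separation that shrinks linearly with range: p = f * d / D.
    func rangeFromLaserDots(realSeparationMeters d: Double,
                            pixelSeparation p: Double,
                            focalLengthPixels f: Double) -> Double {
        return f * d / p     // solve the projection equation for D
    }

    // Example: dots 5 cm apart that appear 40 px apart with an 800 px focal
    // length are about 1 meter away.
    let meters = rangeFromLaserDots(realSeparationMeters: 0.05,
                                    pixelSeparation: 40,
                                    focalLengthPixels: 800)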

iPhone: GPS on custom map + CATiledLayer

Really hope someone can help me, as I'm a bit stuck.
I have a custom map of an event rendered with CATiledLayer so users can zoom in and scroll around the map. What I would like to do now is show users where they currently are on the map. I know it can be done, as I've seen another app do it. I'm just not sure how to go about it; maybe I need to convert lat/lon into pixels, but I'm not sure whether that's possible (it depends on how big the image is, etc.).
On another site it was suggested to find the boundaries of the map, after which I can add pins to it, but I'm not sure how to go about that either. Will I need to find every coordinate (lat/lon) within the boundary in order to place a pin at the user's current position?
If anyone can give me any advice or pointers, I'd much appreciate it.
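For what it's worth, if the map is drawn roughly to scale and north-up, and you know the lat/lon of two opposite corners, lat/lon-to-pixel is plain linear interpolation. A minimal sketch (Swift, all names hypothetical):

    import CoreGraphics

    // Linear lat/lon -> pixel mapping. Assumes a north-up map image drawn
    // roughly to scale, with known coordinates at two opposite corners.
    struct MapBounds {
        let topLeftLat: Double, topLeftLon: Double          // north-west corner
        let bottomRightLat: Double, bottomRightLon: Double  // south-east corner
        let imageSize: CGSize                               // full-resolution size

        func pixel(forLat lat: Double, lon: Double) -> CGPoint {
            // Fraction of the way across the map in each direction.
            let u = (lon - topLeftLon) / (bottomRightLon - topLeftLon)
            let v = (lat - topLeftLat) / (bottomRightLat - topLeftLat)
            return CGPoint(x: u * imageSize.width, y: v * imageSize.height)
        }
    }

The image size here is the full-resolution size the CATiledLayer tiles were cut from; the scroll view's own zoom transform takes care of the rest. This breaks down if the map is an artist's rendition rather than to scale, which is exactly the problem discussed below.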
You can use the route-me library by adding your own map source class. A good article that explains how to do it: http://mobilegeo.wordpress.com/2010/07/07/route-me-native-iphone-mapping-framework/
I'm facing a challenge right now in trying to map GPS coords to a map that's an artist's rendition. In particular this is for a ski mountain, so the artist's rendition is a "trail map". The trail map is not accurate in that the whole mountain has been squeezed onto the one view, yet the actual topology of the mountain doesn't conform to the drawing.
I've tried several approaches:
1) Triangulation using known GPS coordinates of the lift stations. This is fairly simple to implement, yet this is not accurate enough and the algorithm fails if the rendition differs enough from the GPS map.
2) Creating a uniform grid for both the GPS map and the Trailmap, then doing a mapping from cells in the GPS map to the Trailmap. The downside to this is it can be a lot of busy work with no easy UI for doing it.
3) Calculating the vector of each lift (each lift being a straight line), finding the closest lift station to a given GPS point, and estimating the trail map location from that vector.
I'm considering #2, which is essentially the simplest solution. But if you've found a better way, I'd love to hear it.
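For what it's worth, once the two grids for #2 exist, the lookup itself is cheap: find the GPS cell containing the point and reuse its fractional position inside the matching, hand-placed trail map cell. A sketch under those assumptions (all names hypothetical):

    import CoreGraphics

    // Option #2 sketch: both maps share one logical grid. Each grid
    // intersection gets a hand-placed pixel position on the trail map,
    // which is what absorbs the artist's distortions.
    struct CellMapping {
        let originLat: Double, originLon: Double   // north-west corner of GPS grid
        let cellLat: Double, cellLon: Double       // GPS extent of one cell
        let trailCorners: [[CGPoint]]              // [row][col] pixel positions

        func trailPoint(lat: Double, lon: Double) -> CGPoint {
            let u = (lon - originLon) / cellLon
            let v = (originLat - lat) / cellLat    // latitude decreases going south
            let col = Int(u), row = Int(v)
            let fu = CGFloat(u - Double(col)), fv = CGFloat(v - Double(row))
            // Bilinear blend of the cell's four hand-placed corners.
            let p00 = trailCorners[row][col],     p10 = trailCorners[row][col + 1]
            let p01 = trailCorners[row + 1][col], p11 = trailCorners[row + 1][col + 1]
            let top = CGPoint(x: p00.x + (p10.x - p00.x) * fu,
                              y: p00.y + (p10.y - p00.y) * fu)
            let bot = CGPoint(x: p01.x + (p11.x - p01.x) * fu,
                              y: p01.y + (p11.y - p01.y) * fu)
            return CGPoint(x: top.x + (bot.x - top.x) * fv,
                           y: top.y + (bot.y - top.y) * fv)
        }
    }

The busywork is all in filling trailCorners; the finer the grid, the better the fit to the rendition.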

iPhone - creating objects in the view in response to user events

I am new to iPhone programming, so part of the problem is that I don't really know what to google to find my answer. I am looking for a way to let a user draw a line on the screen. There is no guarantee that it will be straight; it can be curved or whatever. I was thinking I could create a small square image and, as they draw, place copies of it into an NSSet, but I am not really sure how to communicate each new object up to the view. Up to this point I've just been messing around with objects I put on the view and then assign movement to; this is my first jump into on-the-fly object creation.
It might be that I just need to jump into a particular class or even a tutorial; any guidance would be great.
Thanks!
Are you asking how to create a 'paint'-type application? There's an Apple example for that:
http://developer.apple.com/iphone/library/samplecode/GLPaint/Introduction/Intro.html
This question is also relevant, but might be too complex when you're just starting out:
Improving Finger Painting Performance
If you're a bit more specific about what problem your app is meant to solve, you might get more specific answers.
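If GLPaint is more than you need, a far simpler starting point is to accumulate the touch points in a path owned by the view and redraw it; no per-segment image objects or NSSet required. A minimal sketch (in Swift; code of this era would have been Objective-C, but the structure is identical):

    import UIKit

    // Freehand-drawing view: each touch extends a single path, and the view
    // redraws the whole path. The path itself is the line the user drew.
    class DoodleView: UIView {
        private let path = UIBezierPath()

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            path.move(to: point)             // start a new stroke
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            path.addLine(to: point)          // extend the stroke
            setNeedsDisplay()                // schedule a redraw
        }

        override func draw(_ rect: CGRect) {
            UIColor.black.setStroke()
            path.lineWidth = 3
            path.stroke()
        }
    }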

How does one interact with OBJ-based 3D models on iPhone?

I have a few different OBJ files that I am able to parse and display. This code is based on Jeff LaMarche's "The Start of a WaveFront OBJ File Loader Class". However, I need some means of detecting which coordinates I have selected within a displayed model. Usually one model is displayed at a time, but sometimes there will be two or more on screen, and I want to set up an NSNotificationCenter object to notify other sections of code as to which object is "selected". I have also looked at javacom's "OpenGL ES for iPhone: A Simple Tutorial" and would like to model the behavior of my program after his.
This is my current line of logic:
1) Set up a means to detect where a user has touched the screen
2) Compare those coordinates with the current coordinates of an OBJ-based model
3) If they match, treat the touch as being within the bounds of the object
The touchable set of coordinates must scale with the model. The model can already be scaled, so the hit test will most likely need to follow that scaling.
Also note, I don't need to move the model around on the screen, just detect when it's been touched, whether one model or several are displayed.
While this is most likely quite simple, I've been stumped by this for months now. I would really appreciate any light others can shed on this topic.
Use gluUnProject on the touch coordinates to get a ray going from the screen into the world, and then intersect it with your models to see if one of them has been touched. gluUnProject isn't available on the iPhone by default, but you can look up implementations of it; http://www.mesa3d.org/ has an open-source one.
Read about gluUnProject here: http://web.iiit.ac.in/~vkrishna/data/unproj.html
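Once gluUnProject has given you the near- and far-plane world points for a touch, picking reduces to a ray test against each model's bounds. A sketch using one bounding sphere per model, which also scales cleanly with the model (Swift with simd for brevity; the original context would be C/Objective-C, and the ray test is the same math either way):

    import simd

    // Hypothetical per-model bounds; multiply radius by the model's scale
    // factor so the touch target grows and shrinks with the model.
    struct BoundingSphere {
        var center: SIMD3<Float>
        var radius: Float
    }

    // True if the ray from nearPoint toward farPoint passes within the sphere.
    func rayHits(_ sphere: BoundingSphere,
                 nearPoint: SIMD3<Float>, farPoint: SIMD3<Float>) -> Bool {
        let dir = simd_normalize(farPoint - nearPoint)
        let t = simd_dot(sphere.center - nearPoint, dir)  // closest approach
        let closest = nearPoint + dir * max(t, 0)
        return simd_length(sphere.center - closest) <= sphere.radius
    }

Test every model's sphere and take the closest hit; that hit is what you would publish through NSNotificationCenter as the "selected" object.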