Is there a way to identify streets and roads on Mapbox maps and get information about their shapes, etc.?
I am talking about something like this: http://www.playmapscube.com/ The ball follows the paths created by streets.
Nope: the Mapbox iOS SDK's maps are tile-based rasters, so they don't contain street data so much as a rendered image of that data. In Mapbox GL Cocoa it's possible to get at the raw data, but that will require a bit of experimentation, since it's a very new project.
Related
I have gone through the Mapbox documentation and learned that it provides offline support for both Android and iOS.
My requirement is to use the directions functionality offline. Since we can download and store a particular region, can we also load directions between two points within it?
Does Mapbox provide such functionality? I have gone through the documentation and learned about the offline map features, but is there any way to integrate offline directions using Mapbox?
Directions is a route-generation API that is only accessible over the internet; there is no offline variant. Only the first 100,000 requests per month are free. To access the API you make a request like
https://api.mapbox.com/directions/v5/mapbox/driving/-122.42,37.78;-77.03,38.91?access_token=YOUR_MAPBOX_ACCESS_TOKEN
Read more about Mapbox offline and the Mapbox Directions API.
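As a rough illustration of how such a request can be issued from code (a minimal sketch only: the coordinates and the `driving` profile are simply the values from the URL above, and a real access token has to be substituted; note that it requires a network connection, which is the answer's point):

```typescript
// Minimal sketch: request a driving route from the Mapbox Directions API.
// The token and coordinates are placeholders taken from the example URL above.
const accessToken = "YOUR_MAPBOX_ACCESS_TOKEN";
const coordinates = "-122.42,37.78;-77.03,38.91"; // lon,lat;lon,lat

async function fetchRoute(): Promise<void> {
  const url =
    `https://api.mapbox.com/directions/v5/mapbox/driving/${coordinates}` +
    `?geometries=geojson&access_token=${accessToken}`;

  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Directions request failed: ${response.status}`);
  }

  const data = await response.json();
  // Each returned route carries a distance (meters), duration (seconds) and a geometry.
  const route = data.routes[0];
  console.log(route.distance, route.duration, route.geometry);
}

fetchRoute().catch(console.error);
```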
In Maps SDK v6 you can see that billing is based on tile requests, but there is no mention of per-tile charges for v10. Has that been dropped?
Your offline tile usage will be billed as Vector Tiles API or Raster Tiles API requests.
According to the following paragraph, the Maps SDK includes use of the Vector Tiles API.
Which Mapbox services are included?
Maps SDK MAUs cover any use of the Vector Tiles API and Raster Tiles API in an application that includes the Maps SDK with no upfront commitments or annual contracts. These APIs are used, for example, when displaying a map view and panning around the map.
The Google Earth API has been discontinued.
It seems that KML files used in Google Earth can be loaded in Cesium. Can they be used in Mapbox GL JS?
Cesium does not ship complete 3D building data.
Mapbox GL JS provides 3D building data, but I don't know whether it can be freely used in an in-house web application.
Cesium and Mapbox-GL-JS are both JavaScript libraries for displaying maps using WebGL. Other than that they're extremely different. Cesium supports a globe view, 3D tiles, full 3D, textured meshes and tons of other things. Mapbox-GL-JS supports 2.5D (that is, a 2D shape extruded by a height), with limited support for compositing true 3D objects using other libraries such as Three.js.
Cesium does not have complete 3D building information. MapBox GL JS has prepared 3D building information
Cesium and Mapbox-GL-JS are just rendering engines. However, Mapbox.com provides basemaps that contain building data.
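For what it's worth, the 2.5D support mentioned above is exposed in Mapbox GL JS as a fill-extrusion layer over the building data in Mapbox's basemaps. A minimal sketch, assuming a Mapbox style whose composite source includes a `building` layer, a page element with the id `map`, and a valid access token:

```typescript
import mapboxgl from "mapbox-gl";

mapboxgl.accessToken = "YOUR_MAPBOX_ACCESS_TOKEN";

const map = new mapboxgl.Map({
  container: "map",                          // id of the page element holding the map
  style: "mapbox://styles/mapbox/light-v11", // any style whose composite source has buildings
  center: [-74.0066, 40.7135],
  zoom: 15.5,
  pitch: 45,
});

map.on("load", () => {
  // Extrude the 2D building footprints by their height attribute (2.5D).
  map.addLayer({
    id: "3d-buildings",
    source: "composite",
    "source-layer": "building",
    filter: ["==", "extrude", "true"],
    type: "fill-extrusion",
    minzoom: 15,
    paint: {
      "fill-extrusion-color": "#aaa",
      "fill-extrusion-height": ["get", "height"],
      "fill-extrusion-base": ["get", "min_height"],
      "fill-extrusion-opacity": 0.6,
    },
  });
});
```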
I am very new to augmented reality software. I want to design a simple app. As part of this app, there will be a series of uniquely designed tags. These tags will be placed on some assets. In the application, I want to store some metadata for each asset. Imagine a DB table with fields like (asset_id, name, var1, var2, ...) holding the asset metadata.
So, when the augmented reality app detects a unique image, it will show that asset's metadata over the marker. It is that simple. In summary, I want to know how I can use image markers to differentiate assets. Sorry if I am asking a very basic question.
Regards,
Ferda
First of all, your question is too broad. How are you planning to implement this application? You first have to decide whether you will use an augmented reality SDK or your own computer vision techniques.
My suggestion would be to choose one SDK from ARCore, Vuforia or ARKit, based on the devices or platform you are targeting. I am not familiar with ARKit, but in ARCore and Vuforia, augmented images or image targets are held in an image database, so you can get the id or name of any target your device detects. That lets you visualize specific assets for specific images.
Below you can see an ARCore augmented image database. Every image has a name, and in your code you can differentiate images using image.Name, then visualize the corresponding metadata over the marker.
Also, in both SDKs you can define your own database, but your images should not have repetitive features and should have high-contrast sections.
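Platform aside, the lookup described above boils down to keying your asset metadata by the image/marker name the SDK reports. A minimal, platform-agnostic sketch in TypeScript, with hypothetical image names and fields mirroring the question's example table:

```typescript
// Hypothetical asset metadata keyed by the image/marker name the AR SDK reports.
// Field names mirror the question's example table (asset id, name, var1, var2, ...).
interface AssetMetadata {
  assetId: number;
  name: string;
  var1: string;
  var2: string;
}

const assetsByImageName: Record<string, AssetMetadata> = {
  pump_station_tag: { assetId: 1, name: "Pump station", var1: "placeholder", var2: "placeholder" },
  valve_07_tag: { assetId: 2, name: "Valve 07", var1: "placeholder", var2: "placeholder" },
};

// Call this whenever the AR SDK reports a tracked image (e.g. via image.Name in ARCore).
function onImageDetected(imageName: string): void {
  const asset = assetsByImageName[imageName];
  if (asset) {
    // Render asset.name, asset.var1, ... in an overlay anchored to the marker.
    console.log(`Detected asset #${asset.assetId}: ${asset.name}`);
  }
}
```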
Vuforia has a similar concept as well. The choice between ARCore and Vuforia depends on which devices you target and on the quality of image tracking. In my opinion Vuforia detects images better, and it is really fast at detecting them.
I need to develop an iPhone 3D map application similar to Virtual Earth or Google Earth. The application will have image overlays above the 3D map, such as clouds or location pins. Does anyone have any ideas on that?
Regards
Edit:
I will try to phrase it less vaguely this time:
As far as I know, the Google Earth and Microsoft Virtual Earth (3D) APIs are not supported on any iOS device.
Instead of redoing everything from the ground up with OpenGL ES, which is the only way to do hardware-accelerated 3D on iOS devices, I want to build the map application on established map services, such as Google Maps. However, the map will be in 3D.
Of course, I could make a simple 3D earth using OpenGL ES with hard-coded geo locations, similar to Living Earth HD, but I would like to avoid that.
Have a look at the WhirlyGlobe library: http://code.google.com/p/whirlyglobe/
You can also check out eeGeo's iOS and Android SDKs, which offer vector-based 3D maps for all of the USA, Canada and the UK.