I couldn't find it in the introductory docs
https://www.mapbox.com/help/how-web-apps-work/#mapbox-gl-js
Thanks
GL is a reference to WebGL (Web Graphics Library). So it stands for 'Graphics Library'
https://en.wikipedia.org/wiki/WebGL
Mapbox uses Mapbox GL JS, a client-side renderer, so it uses JavaScript and WebGL to draw the data dynamically, with the speed and smoothness of a video game.
A more detailed explanation can be found on this SO answer...
Mapbox GL JS vs. Mapbox.js
Essentially, Mapbox GL is an upgrade of Mapbox.js that exploits client-side WebGL technology (among other upgrades).
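To make the "client-side renderer" point concrete, here is a minimal sketch of initializing a Mapbox GL JS map. The token is hypothetical, and the actual `new mapboxgl.Map(...)` call is commented out because it needs a browser page with a `<div id="map">` and the library loaded:

```javascript
// Minimal Mapbox GL JS setup (sketch; assumes the mapboxgl library is
// loaded in the page and an element <div id="map"> exists).
// mapboxgl.accessToken = 'YOUR_ACCESS_TOKEN'; // hypothetical token

const mapOptions = {
  container: 'map',                            // id of the DOM element to render into
  style: 'mapbox://styles/mapbox/streets-v11', // a vector style, drawn client-side with WebGL
  center: [-74.5, 40],                         // [longitude, latitude]
  zoom: 9
};

// const map = new mapboxgl.Map(mapOptions);   // WebGL rendering starts here
```

Because the style is vector data rendered on the client, pans, zooms, and rotations are redrawn on the GPU rather than fetched as pre-rendered raster tiles.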
Related
I'd like the HoloLens to take in input through the camera and project an image over tracked images, and I can't seem to find a concrete way to do this online. I'd like to avoid using Vuforia etc. for this.
I'm currently using the AR Foundation Tracked Image Manager (https://docs.unity3d.com/Packages/com.unity.xr.arfoundation#2.1/manual/tracked-image-manager.html) to achieve the same functionality on mobile, but it doesn't seem to work very well on HoloLens.
Any help would be very appreciated, thanks!
AR Foundation is a Unity tool, and the 2D image tracking feature of AR Foundation is not supported on HoloLens platforms for now. You can refer to this link to learn more about feature support per platform: Platform Support
Currently, Microsoft does not provide an official library to support image tracking on HoloLens. But that sort of thing is possible with OpenCV: you can implement it yourself or refer to some third-party libraries.
Besides, if you are using HoloLens 2 and making a QR code the tracking image is an acceptable option in your project, I recommend using Microsoft.MixedReality.QR to detect a QR code in the environment and get the coordinate system for the QR code. For more information, see: QR code tracking
The Google Earth API has been discontinued.
It seems that KML files used in Google Earth can be used in Cesium. Can they be used in Mapbox GL JS?
Cesium does not have complete 3D building information.
Mapbox GL JS provides 3D building information, but I don't know whether it can be freely used for in-house web applications.
Cesium and Mapbox-GL-JS are both JavaScript libraries for displaying maps using WebGL. Other than that, they're extremely different. Cesium supports a globe view, 3D Tiles, full 3D, textured meshes, and tons of other things. Mapbox-GL-JS supports 2.5D (that is, a 2D shape with height), with limited support for compositing true 3D objects using other libraries such as Three.js.
Cesium does not have complete 3D building information. MapBox GL JS has prepared 3D building information
Cesium and Mapbox-GL-JS are just rendering engines. However, Mapbox.com provides basemaps that contain building data.
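To illustrate the 2.5D "shape with height" model, here is a sketch of a fill-extrusion layer definition in Mapbox GL JS. It assumes a Mapbox basemap whose "composite" source includes a "building" source layer with height attributes; the layer id and styling values are illustrative:

```javascript
// Sketch of a 2.5D building layer for Mapbox GL JS (assumes a Mapbox
// basemap style whose 'composite' source contains a 'building' layer).
const buildingLayer = {
  id: '3d-buildings',            // arbitrary layer id chosen for this example
  source: 'composite',
  'source-layer': 'building',
  filter: ['==', 'extrude', 'true'],
  type: 'fill-extrusion',
  paint: {
    'fill-extrusion-color': '#aaa',
    // Height and base come from attributes on each building footprint,
    // so every 2D polygon is extruded to its recorded height.
    'fill-extrusion-height': ['get', 'height'],
    'fill-extrusion-base': ['get', 'min_height'],
    'fill-extrusion-opacity': 0.6
  }
};

// map.addLayer(buildingLayer); // add once the map's style has loaded
```

Note this is extrusion of 2D footprints, not true 3D geometry: there are no textured facades or arbitrary meshes, which is exactly the gap Cesium's 3D Tiles (or a Three.js overlay) fills.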
The way I understand it, there are several environments that support ARCore, and Unity and the Sceneform SDK are some of the options.
I was wondering how are they different from each other besides one being in Java and the other being in C#? Why would someone choose one over the other aside from language preference?
Thank you
Sceneform empowers Android developers to work with ARCore without learning 3D graphics and OpenGL. It includes a high-level scene graph API, a realistic physically based renderer, an Android Studio plugin for importing, viewing, and building 3D assets, and easy integration with ARCore that makes it straightforward to build AR apps. See this video from Google I/O '18.
Whereas ARCore in Unity uses three key capabilities to integrate virtual content with the real world as seen through your phone's camera:
Motion tracking allows the phone to understand and track its position relative to the world.
Environmental understanding allows the phone to detect the size and location of all types of surfaces: horizontal, vertical, and angled surfaces like the ground, a coffee table, or walls.
Light estimation allows the phone to estimate the environment's current lighting conditions.
ARCore is Google’s platform for building augmented reality experiences. Using different APIs, ARCore enables your phone to sense its environment, understand the world and interact with information. Some of the APIs are available across Android and iOS to enable shared AR experiences.
I've already made an augmented reality app that can read image markers, but I wonder if I can do augmented reality without markers in Unity.
Can anyone tell me how?
Maybe what you need is SLAM: Simultaneous Localization And Mapping. It is markerless; it just recognizes and tracks the environment.
Here are two videos about SLAM:
https://www.youtube.com/watch?v=HbaEw5-YvA0
https://www.youtube.com/watch?v=_YLzcWX-gWU
One is from Kudan, one is from Wikitude. If this feature is what you want, then I am sure that what you need is SLAM.
You can get more from:
wikitude slam
kudan
And Vuforia is not advised: although Smart Terrain does recognize the environment, it still needs a marker.
You can use the Google ARCore, Vuforia, or 8th Wall SDKs. All of them have motion tracking or extended tracking, so you do not have to use markers. You can take a look at the ARCore HelloAR example:
https://developers.google.com/ar/develop/unity/tutorials/hello-ar-sample
You can use ARUnity. ARUnity is the Unity plugin for ARToolKit. (Well, the marker would be some kind of image which is used for tracking; ARToolKit calls it NFT.)
You can download it here:
http://www.artoolkit.org/download-artoolkit-sdk
(Scroll down for the ARUnity download link)
Documentation is available here:
http://www.artoolkit.org/documentation/doku.php?id=6_Unity:unity_getting_started
Best of all, it is free and open-source :).
Is there a way to identify streets and roads on MapBox maps and get information about their shapes, etc?
I am talking about something like this: http://www.playmapscube.com/ The ball follows the paths created by streets.
Nope: the Mapbox iOS SDK's maps are tile-based rasters, so they don't contain street data so much as a representation of that data as an image. In Mapbox GL Cocoa, it's possible to get at the raw data, but it'll require a bit of experimentation, since it's a very new project.
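For comparison, the web renderer, Mapbox GL JS, does expose the underlying vector features. Here is a sketch of reading road features at a screen point via `queryRenderedFeatures`; it assumes the active style contains a layer with id 'road' (layer ids vary by style, so check `map.getStyle().layers`):

```javascript
// Sketch: inspecting street features under a screen point in Mapbox GL JS.
// Assumes `map` is a mapboxgl.Map whose style has a layer with id 'road'
// (a hypothetical id here; real ids depend on the style in use).
function logRoadsAt(map, point) {
  const features = map.queryRenderedFeatures(point, { layers: ['road'] });
  for (const f of features) {
    // Each feature carries its vector geometry (e.g. a LineString of
    // [lng, lat] coordinates) plus properties such as the road class.
    console.log(f.properties.class, f.geometry.type);
  }
  return features;
}

// Typical wiring: log the roads wherever the user clicks.
// map.on('click', (e) => logRoadsAt(map, e.point));
```

With the geometry in hand, building something like the Maps Cube demo becomes a matter of treating those LineStrings as paths for your own objects.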