How to get Vuforia Cloud Recognition-like functionality in ARCore (Unity)

I have created a prototype application in Unity using Vuforia. I upload an image to my server; the server then sends the image (with the associated asset bundle's link as metadata) to the Vuforia cloud to add it to the image target database. Then, in Unity, when the camera recognizes the image target, I download the asset bundle and use it to augment the target.
public void OnNewSearchResult(TargetFinder.TargetSearchResult targetSearchResult)
{
    TargetFinder.TargetSearchResult cloudRecoSearchResult =
        (TargetFinder.TargetSearchResult)targetSearchResult;
    mTargetMetadata = cloudRecoSearchResult.MetaData;
    Debug.Log(mTargetMetadata);
    mCloudRecoBehaviour.CloudRecoEnabled = false;

    // Build augmentation based on target
    if (ImageTargetTemplate)
    {
        Debug.Log("Image target activated");

        // enable the new result with the same ImageTargetBehaviour:
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        ImageTargetBehaviour imageTargetBehaviour =
            (ImageTargetBehaviour)tracker.TargetFinder.EnableTracking(
                targetSearchResult, ImageTargetTemplate.gameObject);

        JsonData jd = JsonMapper.ToObject(mTargetMetadata);
        string url = jd["content-url"].ToString();
        Debug.Log("video url: " + "http://192.168.2.92/arads/" + url);
        vidPlayer.url = "http://192.168.2.92/arads/" + url;
        vidPlayer.Prepare();
        if (!vidPlayer.isPlaying)
            vidPlayer.Play();
    }
}
The above code retrieves the associated video from the server. Can I get similar functionality with ARCore or AR Foundation? I read that ARCore's reference image database can hold up to 1,000 images.
What if the image I am tracking is not in the current database; can I switch to a different database in that case?
Do I have to download and add the image to the database in the application whenever I upload a new image to the server?
Can these images in ARCore have metadata like in Vuforia?

The difference between ARCore and Vuforia is that in ARCore you can add images to the database at runtime, so you do not have to use any server-side recognition service.
You can switch to a different database by modifying the session config via GoogleARCore.ARCoreSessionConfig.AugmentedImageDatabase.
As I said, you can add images to the database at runtime, so as long as you have the image available in your project (or load it as a texture at runtime) you can add it to the database.
I do not think having metadata is possible; the only information you can get back is the database index (or name) of the recognized image.
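For example, with AR Foundation (which the question also mentions and which runs on top of ARCore on Android), a minimal sketch of adding a downloaded image at runtime and using its name as a stand-in for metadata might look like this. It assumes AR Foundation 4.x with an ARTrackedImageManager in the scene; RuntimeImageLibrary, AddImage and contentUrlsByImageName are hypothetical names:
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class RuntimeImageLibrary : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager trackedImageManager;

    // Hypothetical "metadata": map a reference image name to a content URL
    // fetched from your own server alongside the marker image itself.
    readonly Dictionary<string, string> contentUrlsByImageName = new Dictionary<string, string>();

    void OnEnable()  { trackedImageManager.trackedImagesChanged += OnTrackedImagesChanged; }
    void OnDisable() { trackedImageManager.trackedImagesChanged -= OnTrackedImagesChanged; }

    // Call this after downloading a marker image from your server as a readable Texture2D.
    public void AddImage(Texture2D texture, string imageName, float widthInMeters, string contentUrl)
    {
        if (trackedImageManager.referenceLibrary is MutableRuntimeReferenceImageLibrary mutableLibrary)
        {
            // Schedules a background job; the image becomes trackable once the job finishes.
            mutableLibrary.ScheduleAddImageWithValidationJob(texture, imageName, widthInMeters);
            contentUrlsByImageName[imageName] = contentUrl;
        }
        else
        {
            Debug.LogWarning("The current reference image library is not mutable on this platform.");
        }
    }

    void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var trackedImage in args.added)
        {
            // The reference image name plays the role of Vuforia's metadata key.
            if (contentUrlsByImageName.TryGetValue(trackedImage.referenceImage.name, out var url))
                Debug.Log($"Recognized {trackedImage.referenceImage.name}, content: {url}");
        }
    }
}
The texture must be readable, and not every provider supports mutable libraries, so you can check trackedImageManager.descriptor.supportsMutableLibrary before relying on this.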
Good Luck!

Related

Ionic Capacitor Camera Delete Image

I'm using the Capacitor Camera API to get images. I'm just interested in the base64-encoded image data, so I don't need any image paths. I'm using the following code:
const image = await Camera.getPhoto({
  quality: 90,
  allowEditing: false,
  resultType: CameraResultType.Base64
});
I noticed that the local "user data" increases with each image the user takes (tested on Android). The image gets stored somewhere (on Android it's "Android/data/com.mypackage/files/Pictures"). I can't test it on iOS at the moment; I guess it behaves differently there.
Is there any good way to delete those image files?
I could get the image path if I change the resultType, read the image with the file API and convert it manually into base64, but that makes the resultType setting useless.
Any ideas?
Use the Filesystem API to delete the temporary image file when you're done with the image.
If you're using the Ionic Native camera plugin, check here: https://ionicframework.com/docs/v3/native/camera/. It has a function called cleanup(), which removes intermediate image files that are kept in temporary storage after calling camera.getPicture. It applies only when the value of Camera.sourceType equals Camera.PictureSourceType.CAMERA and Camera.destinationType equals Camera.DestinationType.FILE_URI.
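If you stay with the Capacitor Camera plugin, one hedged workaround (the approach the question already hints at, assuming Capacitor 3+ with @capacitor/camera and @capacitor/filesystem; verify the delete-by-full-path behaviour on your target platforms) is to request a file URI, read it into base64 yourself, and then delete the temporary file:
import { Camera, CameraResultType } from '@capacitor/camera';
import { Filesystem } from '@capacitor/filesystem';

// Take a photo, return its base64 data, and remove the temporary file afterwards.
async function getPhotoAsBase64(): Promise<string> {
  const photo = await Camera.getPhoto({
    quality: 90,
    allowEditing: false,
    resultType: CameraResultType.Uri // gives us a path we can delete later
  });

  // On a device, photo.path is a file:// URI pointing into the app's storage.
  const { data } = await Filesystem.readFile({ path: photo.path! });

  // Clean up the temporary image so "user data" does not keep growing.
  await Filesystem.deleteFile({ path: photo.path! });

  return data as string; // base64-encoded on native platforms
}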

Overlay text and images to YouTube live stream from Flutter app

I am looking into creating a Flutter mobile app that live streams to YouTube using the YouTube Live Streaming API. I have checked the API and found that it does not offer a way to overlay text and images onto the livestream. How would I achieve this using Flutter?
I imagine this involves using the Stack widget to overlay content on top of the user's video feed. However, this would somehow need to be encoded into the video stream that is sent to YouTube.
This type of work is usually done with FFmpeg.
See this discussion for more info: https://video.stackexchange.com/questions/12105/add-an-image-overlay-in-front-of-video-using-ffmpeg
FFmpeg for mobile devices is made available by this project:
https://github.com/tanersener/mobile-ffmpeg
And there is a Flutter package called flutter_ffmpeg that exposes these features in Flutter:
https://pub.dev/packages/flutter_ffmpeg
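As a rough illustration (not a full live-streaming pipeline), burning an image on top of a video with flutter_ffmpeg could look like the sketch below; the file names and overlay position are placeholders, and streaming the result to YouTube's RTMP ingest URL would need additional FFmpeg output options:
import 'package:flutter_ffmpeg/flutter_ffmpeg.dart';

final FlutterFFmpeg _ffmpeg = FlutterFFmpeg();

Future<void> overlayImageOnVideo() async {
  // Draw overlay.png on top of input.mp4, 10 px from the top-left corner.
  final int rc = await _ffmpeg.execute(
      '-i input.mp4 -i overlay.png -filter_complex overlay=10:10 '
      '-codec:a copy output.mp4');
  print('FFmpeg exited with rc=$rc');
}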
TL;DR: You can use CameraController (from the camera package) and Canvas in Flutter for drawing the text. Unfortunately CameraController.startImageStream is not documented in the API docs and has been an open GitHub issue for more than a year.
Every time the camera plugin gives you a video frame via controller.startImageStream((CameraImage img) { /* your code */ }), you can draw the image onto the canvas, draw the text, capture the result and call the YouTube API. You can see an example of using the video buffer in the TensorFlow Lite package, or read more info in the GitHub issue mentioned above.
On this same canvas, you can draw whatever you want, like drawArc, drawParagraph, drawPoints. It gives you ultimate flexibility.
A simple example of capturing the canvas contents is below, where I previously saved the strokes in state (you would use details about the text instead, and just pull the latest frame from the camera):
Future<img.Image> getDrawnImage() async {
  ui.PictureRecorder recorder = ui.PictureRecorder();
  Canvas canvas = Canvas(recorder);
  canvas.drawColor(Colors.white, BlendMode.src);
  StrokesPainter painter = StrokesPainter(
      strokes: InheritedStrokesHistory.of(context).strokes);
  painter.paint(canvas, deviceData.size);
  ui.Image screenImage = await (recorder.endRecording().toImage(
      deviceData.size.width.floor(), deviceData.size.height.floor()));
  ByteData imgBytes =
      await screenImage.toByteData(format: ui.ImageByteFormat.rawRgba);
  return img.Image.fromBytes(deviceData.size.width.floor(),
      deviceData.size.height.floor(), imgBytes.buffer.asUint8List());
}
I was going to add a link to an app I made which allows you to draw and save a screenshot of the drawing to your phone gallery (it also uses TensorFlow Lite), but the code is a little complicated. It's probably best to clone it and see what it does if you are struggling with capturing the canvas.
I initially could not find the documentation on startImageStream and had forgotten that I had used it for TensorFlow Lite, so I suggested using MethodChannel.invokeMethod and writing iOS/Android-specific code. Keep that in mind if you run into any limitations in Flutter, although I don't think Flutter will limit you in this problem.

How can I access SceneKit files downloaded from Firebase?

I have an ARKit app that uses image recognition to trigger SceneKit 3D object files, specifically for art exhibitions.
I've recently begun implementing Firebase Storage in order to reduce the overall download size and instead download the 3D files on demand, depending on which exhibition the user is viewing.
I've successfully set up the app to download the SceneKit files to the device's storage, but where I am stuck now is figuring out how to read the file from the downloaded location and keep using it in the image recognition/AR process.
Before, when the SceneKit files were included in the initial download via a SceneKit catalog in the app folder, I would read them like so:
let ShipScene = SCNScene(named: "art.scnassets/ship.scn")
Now, I've updated it to use the same string as the download URL, but the image recognition is not working.
I assume my problem is that it is not reading from the right location, or that I may be using the wrong function. I've put the app on my phone through TestFlight and still had no luck.
// Downloading from firebase to device URL
let shipURL = documentsURL.appendingPathComponent("file:///var/mobile/Containers/Data/Application/QZ_Gallery/SceneKitFiles/ship.scn", isDirectory: true)
let shipDownload = shipRef.write(toFile: shipURL)
// Attempting to pull the file from the downloaded location
let ShipScene = SCNScene(named:"file:///var/mobile/Containers/Data/Application/QZ_Gallery/SceneKitFiles/ship.scn" )
My question is about the final line of code I included: am I looking in the right place to retrieve the downloaded files, and is the function I am using even the proper one for loading files stored locally on a mobile device?
Solved by first adding code to build the path to the local folder that holds all my scenes, and then using that scene path when fetching the individual .scn file. The scene downloads from Firebase Storage (no use of the Realtime Database) and integrates properly with the image recognition.
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
let scenePath = documentsURL.appendingPathComponent("Scenes")

// Create the "Scenes" folder on first run.
// Note: fileExists(atPath:) expects a plain path (scenePath.path), not a file:// URL string.
if !FileManager.default.fileExists(atPath: scenePath.path) {
    do {
        try FileManager.default.createDirectory(at: scenePath, withIntermediateDirectories: true, attributes: nil)
    } catch {
        print(error.localizedDescription)
    }
}

// Start the download, writing to a file inside that folder
let sceneURL = scenePath.appendingPathComponent("Ship.scn", isDirectory: false)
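From there, the download and the later load look roughly like this (a sketch, not the full answer code; sceneRef stands for whatever StorageReference points at the .scn file in Firebase Storage):
// Download the scene from Firebase Storage into the local file URL...
let downloadTask = sceneRef.write(toFile: sceneURL) { url, error in
    if let error = error {
        print(error.localizedDescription)
        return
    }
    // ...then load it with SCNScene(url:), not SCNScene(named:), since the
    // file lives in the Documents directory rather than the app bundle.
    if let url = url, let shipScene = try? SCNScene(url: url, options: nil) {
        // Use shipScene as the content for the recognized image target.
    }
}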

Can I run ARCore Preview 1 App on Preview 2 release?

I've built an app which runs on the ARCore Preview 1 package in Unity. I know Google has made major changes in Preview 2.
My question is: what changes will I have to make in order to run my ARCore Preview 1 app on Preview 2?
Take a look at the code in the Preview 2 sample app(s) and update your code accordingly. For example, here is the new code for properly instantiating an object into the AR scene:
if (Session.Raycast(touch.position.x, touch.position.y, raycastFilter, out hit))
{
    var andyObject = Instantiate(AndyAndroidPrefab, hit.Pose.position,
        hit.Pose.rotation);

    // Create an anchor to allow ARCore to track the hitpoint
    // as understanding of the physical world evolves.
    var anchor = hit.Trackable.CreateAnchor(hit.Pose);

    // Andy should look at the camera but still be flush with the plane.
    andyObject.transform.LookAt(FirstPersonCamera.transform);
    andyObject.transform.rotation = Quaternion.Euler(0.0f,
        andyObject.transform.rotation.eulerAngles.y,
        andyObject.transform.rotation.z);

    // Make Andy model a child of the anchor.
    andyObject.transform.parent = anchor.transform;
}
Common changes:
Preview 1 used the Tango Core service; in Preview 2 it is replaced by the ARCore service.
Automatic screen rotation is now handled.
Some classes were altered, for reasons such as the following.
For users:
AR Stickers were introduced.
For developers:
A new C API for use with the Android NDK that complements the existing Java, Unity, and Unreal SDKs;
Functionality that lets AR apps pause and resume AR sessions, for example to let a user return to an AR app after taking a phone call;
Improved accuracy and runtime efficiency across the anchor, plane finding, and point cloud APIs.
I have updated my app from Preview 1 to Preview 2, and it is not a lot of work. There were minor API changes, like the ones for hit flags, Pose.position, etc. It would not be useful to post the whole changelog here; I suggest you follow the steps below:
Replace the old SDK with the new one in the Unity project.
Then check for errors in your editor of choice (Visual Studio, VS Code, or MonoDevelop).
Check the relevant APIs in the ARCore developer docs.
It's not such a cumbersome job; it took me some 5-10 minutes to upgrade.
Cheers!

How to write a web-based music visualizer?

I'm trying to find the best approach to build a music visualizer to run in a browser over the web. Unity is an option, but I'll need to build a custom audio import/analysis plugin to get the end user's sound output. Quartz does what I need but only runs on Mac/Safari. WebGL seems not ready. Raphael is mainly 2D, and there's still the issue of getting the user's sound... any ideas? Has anyone done this before?
Making something audio-reactive is pretty simple. Here's an open source site with lots of audio-reactive examples.
As for how to do it, you basically use the Web Audio API to stream the music and its AnalyserNode to get audio data out.
"use strict";
const ctx = document.querySelector("canvas").getContext("2d");
ctx.fillText("click to start", 100, 75);
ctx.canvas.addEventListener('click', start);
function start() {
ctx.canvas.removeEventListener('click', start);
// make a Web Audio Context
const context = new AudioContext();
const analyser = context.createAnalyser();
// Make a buffer to receive the audio data
const numPoints = analyser.frequencyBinCount;
const audioDataArray = new Uint8Array(numPoints);
function render() {
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
// get the current audio data
analyser.getByteFrequencyData(audioDataArray);
const width = ctx.canvas.width;
const height = ctx.canvas.height;
const size = 5;
// draw a point every size pixels
for (let x = 0; x < width; x += size) {
// compute the audio data for this point
const ndx = x * numPoints / width | 0;
// get the audio data and make it go from 0 to 1
const audioValue = audioDataArray[ndx] / 255;
// draw a rect size by size big
const y = audioValue * height;
ctx.fillRect(x, y, size, size);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
// Make a audio node
const audio = new Audio();
audio.loop = true;
audio.autoplay = true;
// this line is only needed if the music you are trying to play is on a
// different server than the page trying to play it.
// It asks the server for permission to use the music. If the server says "no"
// then you will not be able to play the music
// Note if you are using music from the same domain
// **YOU MUST REMOVE THIS LINE** or your server must give permission.
audio.crossOrigin = "anonymous";
// call `handleCanplay` when it music can be played
audio.addEventListener('canplay', handleCanplay);
audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
audio.load();
function handleCanplay() {
// connect the audio element to the analyser node and the analyser node
// to the main Web Audio context
const source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
}
}
canvas { border: 1px solid black; display: block; }
<canvas></canvas>
Then it's just up to you to draw something creative.
Note some troubles you'll likely run into:
At this point in time (2017/1/3) neither Android Chrome nor iOS Safari support analysing streaming audio data; instead you have to load the entire song. Here's a library that tries to abstract that a little.
On mobile you cannot automatically play audio. You must start the audio from a handler for a user input event such as 'click' or 'touchstart'.
As pointed out in the sample, you can only analyse audio if the source is either from the same domain or you ask for CORS permission and the server grants it. AFAIK only SoundCloud gives permission, and it's on a per-song basis; the individual artist's settings for a song determine whether audio analysis is allowed.
To try to explain this part:
The default is that you have permission to access all data from the same domain but no permission for data from other domains.
When you add
audio.crossOrigin = "anonymous";
That basically says "ask the server for permission on behalf of the user 'anonymous'". The server can give permission or not; it's up to the server. This includes the server on the same domain, which means that if you're going to request a song on the same domain you need to either (a) remove the line above or (b) configure your server to give CORS permission. Most servers by default do not give CORS permission, so if you add that line and the server does not grant it, trying to analyse the audio will fail even if the server is on the same domain.
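If you control the server hosting the audio, granting that permission usually just means sending an Access-Control-Allow-Origin response header. A minimal sketch with an Express static server (a hypothetical setup, not part of the original answer; adapt it to whatever server you actually use):
const express = require('express');
const app = express();

// Allow any origin to fetch (and analyse) files served from this server.
// Replace '*' with a specific origin if you want to be stricter.
app.use((req, res, next) => {
  res.setHeader('Access-Control-Allow-Origin', '*');
  next();
});

app.use(express.static('public')); // e.g. public/music.mp3
app.listen(8080);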
music: DOCTOR VOX - Level Up
By WebGL being "not ready", I'm assuming that you're referring to its market penetration (it's only supported in WebKit and Firefox at the moment).
Other than that, equalisers are definitely possible using HTML5 audio and WebGL. A guy called David Humphrey has blogged about making different music visualisers using WebGL and was able to create some really impressive ones. He has posted videos of the visualisations on his blog.
I used SoundManager2 to pull the waveform data from the mp3 file. That feature requires Flash 9 so it might not be the best approach.
My waveform demo with HTML5 Canvas:
http://www.momentumracer.com/electriccanvas/
and WebGL:
http://www.momentumracer.com/electricwebgl/
Sources:
https://github.com/pepez/Electric-Canvas
Depending on the complexity, you might be interested in trying out Processing (http://www.processing.org); it has really easy tools to make web-based apps, and it has tools to get the FFT and waveform of an audio file.
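As a rough illustration of that last point (a sketch only, using the Minim library that ships with Processing's desktop/Java mode; "song.mp3" is a placeholder file in the sketch's data folder):
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  player = minim.loadFile("song.mp3", 1024);
  player.play();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  fft.forward(player.mix);  // analyse the currently playing buffer
  for (int i = 0; i < fft.specSize(); i++) {
    line(i, height, i, height - fft.getBand(i) * 4);  // one bar per frequency band
  }
}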