How can I adjust microphone input levels in HoloLens? - unity3d

In our communications app, some people's voices are too quiet. So we want to be able to change the system level of their microphone input.
I searched through all the Windows Universal App samples and Unity documentation and I couldn't find how to change the volume of the Windows microphone (on Windows or HoloLens).

I found that the property to adjust is the AudioDeviceController.VolumePercent property. The following code implements this:
MediaCapture mediaCapture = new MediaCapture();
var captureInitSettings = new MediaCaptureInitializationSettings
{
    StreamingCaptureMode = StreamingCaptureMode.Audio
};
await mediaCapture.InitializeAsync(captureInitSettings);

// VolumePercent is a system-wide setting in the range 0-100.
mediaCapture.AudioDeviceController.VolumePercent = volumeLevel;
I confirmed that this code works on Desktop and HoloLens. It changes the system level, so it's automatically persisted and affects all apps.
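For reference, here is a minimal self-contained sketch of the same approach wrapped in a helper method. The class and method names and the Dispose call are my additions, not from the original answer:

using System.Threading.Tasks;
using Windows.Media.Capture;

public static class MicrophoneLevel
{
    // Sets the system-wide microphone level; volumePercent is 0-100.
    public static async Task SetAsync(float volumePercent)
    {
        var mediaCapture = new MediaCapture();
        var settings = new MediaCaptureInitializationSettings
        {
            StreamingCaptureMode = StreamingCaptureMode.Audio
        };
        await mediaCapture.InitializeAsync(settings);
        mediaCapture.AudioDeviceController.VolumePercent = volumePercent;
        // The level is a system setting, so it persists after disposal.
        mediaCapture.Dispose();
    }
}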

Related

Can I run ARCore Preview 1 App on Preview 2 release?

I've built an app which runs on the ARCore Preview 1 package in Unity. I know Google has made major changes in Preview 2.
My question is: what changes will I have to make to get my ARCore Preview 1 app running on Preview 2?
Take a look at the code in the Preview 2 sample app(s) and update your code accordingly. For example, here is the new code for properly instantiating an object into the AR scene:
if (Session.Raycast(touch.position.x, touch.position.y, raycastFilter, out hit))
{
    var andyObject = Instantiate(AndyAndroidPrefab, hit.Pose.position,
        hit.Pose.rotation);

    // Create an anchor to allow ARCore to track the hitpoint
    // as understanding of the physical world evolves.
    var anchor = hit.Trackable.CreateAnchor(hit.Pose);

    // Andy should look at the camera but still be flush with the plane.
    andyObject.transform.LookAt(FirstPersonCamera.transform);
    andyObject.transform.rotation = Quaternion.Euler(0.0f,
        andyObject.transform.rotation.eulerAngles.y,
        andyObject.transform.rotation.eulerAngles.z);

    // Make the Andy model a child of the anchor.
    andyObject.transform.parent = anchor.transform;
}
Common
Preview 1 used the Tango Core service, which has been replaced by the ARCore service in Preview 2.
Automatic screen rotation is now handled.
Some classes were altered, for reasons like the following.
For Users:
Introduce AR Stickers
For Developers:
A new C API for use with the Android NDK that complements our existing Java, Unity, and Unreal SDKs;
Functionality that lets AR apps pause and resume AR sessions, for example to let a user return to an AR app after taking a phone call;
Improved accuracy and runtime efficiency across our anchor, plane finding, and point cloud APIs.
I have updated my app from Preview 1 to Preview 2, and it's not a lot of work. There were minor API changes, like the ones for hit flags, Pose.position, etc. It wouldn't be practical to post the whole change log here; I suggest you follow the steps below:
Replace the old SDK with the new one in the Unity project.
Then check for errors in your default editor (Visual Studio, VS Code, or MonoDevelop).
Check the relevant APIs in the ARCore developer docs.
It's not such a cumbersome job; it took me some 5-10 minutes to upgrade, that's it.
Cheers!

Adjust camera focus in ARKit

I want to adjust the device's physical camera focus while in augmented reality. (I'm not talking about the SCNCamera object.)
In an Apple Dev forum post, I've read that autofocus would interfere with ARKit's object detection, which makes sense to me.
Now, I'm working on an app where users will be close to the object they're looking at. The camera's default focus makes everything look very blurry when you're closer than about 10 cm to an object.
Can I adjust the camera's focus before initializing the scene, or preferably while in the scene?
20.01.2018
Apparently, there's still no solution to this problem. You can read more in this reddit post and this developer forum post about private API workarounds and other (unhelpful) info.
25.01.2018
@AlexanderVasenin provided a useful update pointing to Apple's documentation. It shows that ARKit will support not just focusing, but also autofocusing, as of iOS 11.3.
See my usage sample below.
As stated by Alexander, iOS 11.3 brings autofocus to ARKit.
The corresponding documentation site shows how it is declared:
var isAutoFocusEnabled: Bool { get set }
You can access it this way:
var configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = true // or false
However, as it is true by default, you should not even have to set it manually, unless you choose to opt out.
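For completeness, here is a minimal sketch of applying the configuration to a running session. It assumes a standard ARSCNView outlet named sceneView, which is not part of the original snippet; the property only takes effect once you run the configuration:

import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = false // opt out of the default autofocus
sceneView.session.run(configuration)     // sceneView is an assumed ARSCNView outlet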
UPDATE: Starting from iOS 11.3, ARKit supports autofocusing, and it's enabled by default (more info). Manual focusing still isn't available.
Prior to iOS 11.3, ARKit supported neither manual focus adjustment nor autofocusing.
Here is Apple's reply on the subject (Oct 2017):
ARKit does not run with autofocus enabled as it may adversely affect plane detection. There is an existing feature request to support autofocus and no need to file additional requests. Any other focus discrepancies should be filed as bug reports. Be sure to include the device model and OS version. (source)
There is another thread on the Apple forums where a developer claims he was able to adjust focus by calling the AVCaptureDevice.setFocusModeLocked(lensPosition:completionHandler:) method on the private AVCaptureDevice used by ARKit, and it appears not to affect tracking. Though the method itself is public, ARKit's AVCaptureDevice is not, so using this hack in production would most likely result in an App Store rejection.
if #available(iOS 16.0, *) {
    // This property is nil on devices that aren't equipped with an ultra-wide camera.
    if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
        do {
            try device.lockForConfiguration()
            // Configure your focus mode here; remember to also set
            // isAutoFocusEnabled on the ARWorldTrackingConfiguration you run.
            device.unlockForConfiguration()
        } catch {
            // Handle the lock failure.
        }
    }
} else {
    // Fallback on earlier versions.
}
Use the configurableCaptureDeviceForPrimaryCamera property; it is only available on iOS 16 or later.
Documentation: ARKit / Configuration Objects / ARConfiguration / configurableCaptureDeviceForPrimaryCamera

How to restrict use of third party camera app from your app

I have Power Cam, a third-party camera app, installed on my device. When I open the camera from my app by tapping the open-camera button, it gives me a choice of cameras: the device camera along with Power Cam. I want the device camera to open directly when the open-camera button is tapped; in other words, I want to restrict the user from using Power Cam from my app.
If you want to run only the official camera, you can use the following intent (based on the official tutorial):
static final int REQUEST_IMAGE_CAPTURE = 1;

private void dispatchTakePictureIntent() {
    Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    // Target the stock camera app directly so no chooser is shown.
    takePictureIntent.setPackage("com.android.camera");
    if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
    }
}
Unfortunately, many devices come with custom preinstalled camera apps, and com.android.camera may not be available.
If you want to filter out specific packages that you don't like, you must prepare your own chooser dialog (see example here). You can skip the dialog if you know which package to choose. E.g. it is possible to filter the list to only include "system" packages. But even then, there is no guarantee that there will be only one system package that is registered to fulfill the MediaStore.ACTION_IMAGE_CAPTURE intent.
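As a rough illustration of that filtering idea, here is a hedged sketch (Java, inside an Activity) that queries the activities handling the capture intent and keeps only preinstalled system packages. The request-code constant is reused from the snippet above; the chooser title is illustrative:

Intent captureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
List<ResolveInfo> candidates =
        getPackageManager().queryIntentActivities(captureIntent, 0);

List<Intent> targets = new ArrayList<>();
for (ResolveInfo info : candidates) {
    ApplicationInfo app = info.activityInfo.applicationInfo;
    // Keep only apps installed in the system image.
    if ((app.flags & ApplicationInfo.FLAG_SYSTEM) != 0) {
        Intent target = new Intent(captureIntent);
        target.setPackage(info.activityInfo.packageName);
        targets.add(target);
    }
}

if (targets.size() == 1) {
    // Exactly one system camera: launch it directly, no chooser.
    startActivityForResult(targets.get(0), REQUEST_IMAGE_CAPTURE);
} else if (!targets.isEmpty()) {
    // Several system cameras: show a chooser limited to the filtered list.
    Intent chooser = Intent.createChooser(targets.remove(0), "Take picture");
    chooser.putExtra(Intent.EXTRA_INITIAL_INTENTS,
            targets.toArray(new Parcelable[0]));
    startActivityForResult(chooser, REQUEST_IMAGE_CAPTURE);
}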

Prevent iOS mobile safari from going idle / auto-locking / sleeping?

In an iOS app you can set application.idleTimerDisabled = YES to prevent the phone from auto locking.
I need to do this in Mobile Safari for a game like Doodle Jump, where the user may not touch the screen for an extended period of time. Is there any documented method or hack to do this?
(Update)
They seem to be doing it somehow on this site: http://www.uncoveryourworld.com. Visit from your iPhone, and when you get to the buildings/street scene with music playing in the background, just leave your phone alone. It never goes to sleep.
(Update 2)
I've spent some time taking a closer look at how they might be keeping the phone from going to sleep. I've done a barebones test, and it seems that the way they loop the audio in the street scene is what keeps it from going to sleep. If you'd like to test this, just put a simple audio player that loops on your page and click play:
<audio src="loop.mp3" onended="this.play();" controls="controls" autobuffer></audio>
Everywhere I searched it was said that this isn't possible, so it's nice to see there is at least some way to do it, even if it is a bit of a hack. Otherwise a browser-based game with Doodle Jump-style play would not be possible. So you could have a loop in your game/app if appropriate, or just play a silent loop.
NoSleep.js seems to work in iOS 11 and it reportedly works on Android as well.
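For reference, a minimal sketch of using NoSleep.js based on its documented enable/disable API; it assumes the library is already included on the page, and enable() must be called from a user gesture:

var noSleep = new NoSleep();

document.addEventListener('click', function enableNoSleep() {
  document.removeEventListener('click', enableNoSleep, false);
  noSleep.enable();  // keep the screen on
  // Call noSleep.disable() later when you no longer need it.
}, false);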
Old answer
This is a simple HTML-only method to do that: looping inline autoplaying videos (it might also work in Android Chrome 53+)
<video playsinline muted autoplay loop src="https://rawgit.com/bower-media-samples/big-buck-bunny-480p-30s/master/video.mp4" height=60></video>
See the same demo on CodePen (includes a stopwatch)
Notes
Avoid loading a big video just for this; a short, tiny, black-only video will do.
To make it fully work, the video needs to stay in the viewport, or you need to start its playback via JS: video.play() (see the sketch below).
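A minimal sketch of that JS route, starting the hidden video from the first user touch since Safari requires a gesture; the element id is illustrative, not from the original answer:

document.addEventListener('touchstart', function startKeepAwake() {
  document.removeEventListener('touchstart', startKeepAwake);
  document.getElementById('keep-awake-video').play(); // assumed element id
});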
Edit: This workaround no longer works. It is not currently possible to prevent the phone from sleeping in Safari.
Yes, you can prevent the phone from sleeping using an audio loop. The trick won't start automatically; you will have to play it when the visitor touches the screen.
<audio loop src="http://www.sousound.com/music/healing/healing_01.mp3"></audio>
Test page: tap play and the display will stay on, but it will dim on some devices, like an iPhone running iOS 7.
Note: be careful using this trick because it will stop any music that the visitors might be using—and it will annoy them.
No, you can't do this, unfortunately. The only way to achieve it is by making a UIWebView application and setting the variable you mentioned there.
https://stackoverflow.com/a/7477438/267892
[edit] Random bug behavior: sometimes the lock screen media controls show up, sometimes they don't.
Years later, I updated my code.
Easy steps:
unlock the audio context
create a silent sound
loop it and play it forever
keep the tab active
Working on Safari iOS 15.3.1, with the tab and browser in the background and the screen off.
// unlock audio context
let ctx = null; // must be assigned in a user-gesture handler; see the follow-up answer below

// create silent sound
let bufferSize = 2 * ctx.sampleRate,
    emptyBuffer = ctx.createBuffer(1, bufferSize, ctx.sampleRate),
    output = emptyBuffer.getChannelData(0);

// fill buffer with silence
for (let i = 0; i < bufferSize; i++)
    output[i] = 0;

// create source node
let source = ctx.createBufferSource();
source.buffer = emptyBuffer;
source.loop = true;

// create destination node
let node = ctx.createMediaStreamDestination();
source.connect(node);
source.start(0); // start the silent loop

// dummy audio element
let audio = document.createElement("audio");
audio.style.display = "none";
document.body.appendChild(audio);

// set source and play
audio.srcObject = node.stream;
audio.play();

// background exec enabled
Even if this approach might not be suitable in every case, you can prevent your phone from locking by reloading the page using JavaScript.
// This will trigger a reload after 30 seconds.
setTimeout(function () {
    self.location = self.location;
}, 30000);
Please note that I tested this with iOS 7 beta 3.
You can stop sleeping and screen dimming in iOS Safari by faking a refresh every 20-30 seconds:
var stayAwake = setInterval(function () {
    location.href = location.href;     // try refreshing
    window.setTimeout(window.stop, 0); // stop it soon after
}, 30000);
Please use this code responsibly; don't use it "just because". If it's only needed for a bit, disable it when you're done:
clearInterval(stayAwake); //allow device sleep again when not needed
Tested in Safari iOS 7, 7.1.2, and 8.1, but it may not work in UIWebView browsers like Chrome for iOS or the Facebook app.
Demo: http://jsbin.com/kozuzuwaya/1
bfred.it's answer works if you replace the audio tag with a video tag, but only if the page is open in iOS 10+ Safari AND the user has started the video. You can hide the video with CSS.
Also, I suspect this feature will be removed at some point.
This is based on nicopowa's answer, which keeps a PWA from being suspended by iOS. (Playing an infinite loop of nothing keeps the app running, even with the screen turned off.)
In order to also make sure that it's triggered by user interaction, the only thing to change is: instead of
let ctx = null
put
let ctx = new AudioContext()
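A minimal sketch of that user-gesture wiring; startSilentLoop is a hypothetical helper wrapping the buffer/loop code above, not from the original answer:

document.addEventListener('click', function unlock() {
  document.removeEventListener('click', unlock);
  let ctx = new AudioContext(); // created inside the gesture, so it starts unlocked
  startSilentLoop(ctx);         // hypothetical helper containing the code above
});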

How to write a web-based music visualizer?

I'm trying to find the best approach to build a music visualizer to run in a browser over the web. Unity is an option, but I'll need to build a custom audio import/analysis plugin to get the end user's sound output. Quartz does what I need but only runs on Mac/Safari. WebGL seems not ready. Raphael is mainly 2D, and there's still the issue of getting the user's sound... any ideas? Has anyone done this before?
Making something audio reactive is pretty simple. Here's an open source site with lots of audio reactive examples.
As for how to do it you basically use the Web Audio API to stream the music and use its AnalyserNode to get audio data out.
"use strict";
const ctx = document.querySelector("canvas").getContext("2d");
ctx.fillText("click to start", 100, 75);
ctx.canvas.addEventListener('click', start);
function start() {
ctx.canvas.removeEventListener('click', start);
// make a Web Audio Context
const context = new AudioContext();
const analyser = context.createAnalyser();
// Make a buffer to receive the audio data
const numPoints = analyser.frequencyBinCount;
const audioDataArray = new Uint8Array(numPoints);
function render() {
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
// get the current audio data
analyser.getByteFrequencyData(audioDataArray);
const width = ctx.canvas.width;
const height = ctx.canvas.height;
const size = 5;
// draw a point every size pixels
for (let x = 0; x < width; x += size) {
// compute the audio data for this point
const ndx = x * numPoints / width | 0;
// get the audio data and make it go from 0 to 1
const audioValue = audioDataArray[ndx] / 255;
// draw a rect size by size big
const y = audioValue * height;
ctx.fillRect(x, y, size, size);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
// Make a audio node
const audio = new Audio();
audio.loop = true;
audio.autoplay = true;
// this line is only needed if the music you are trying to play is on a
// different server than the page trying to play it.
// It asks the server for permission to use the music. If the server says "no"
// then you will not be able to play the music
// Note if you are using music from the same domain
// **YOU MUST REMOVE THIS LINE** or your server must give permission.
audio.crossOrigin = "anonymous";
// call `handleCanplay` when it music can be played
audio.addEventListener('canplay', handleCanplay);
audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
audio.load();
function handleCanplay() {
// connect the audio element to the analyser node and the analyser node
// to the main Web Audio context
const source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
}
}
canvas { border: 1px solid black; display: block; }
<canvas></canvas>
Then it's just up to you to draw something creative.
Note some troubles you'll likely run into:
At this point in time (2017/1/3), neither Android Chrome nor iOS Safari supports analysing streaming audio data. Instead you have to load the entire song. Here's a library that tries to abstract that a little.
On mobile you cannot automatically play audio. You must start the audio inside an input event triggered by user input, like 'click' or 'touchstart'.
As pointed out in the sample, you can only analyse audio if the source is either from the same domain OR you ask for CORS permission and the server grants it. AFAIK only SoundCloud gives permission, and it's on a per-song basis. It's up to the individual artist's song settings whether or not audio analysis is allowed for a particular song.
To try to explain this part
The default is you have permission to access all data from the same domain but no permission from other domains.
When you add
audio.crossOrigin = "anonymous";
That basically says "ask the server for permission on behalf of user 'anonymous'". The server can give permission or not; it's up to the server. This includes the server on the same domain, which means if you're going to request a song on the same domain you need to either (a) remove the line above or (b) configure your server to give CORS permission. Most servers by default do not give CORS permission, so if you add that line, even if the server is on the same domain, trying to analyse the audio will fail unless it grants permission.
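If you control the server hosting the audio, granting that permission just means sending the CORS header. A minimal sketch assuming Node.js with Express (the paths and port are illustrative, not part of the original answer):

const express = require('express');
const app = express();

// Serve audio files and allow any origin to read (and thus analyse) them.
app.use('/music', express.static('music', {
  setHeaders: (res) => res.set('Access-Control-Allow-Origin', '*'),
}));

app.listen(8080);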
music: DOCTOR VOX - Level Up
By WebGL being "not ready", I'm assuming you're referring to its penetration (it's only supported in WebKit and Firefox at the moment).
Other than that, equalisers are definitely possible using HTML5 audio and WebGL. A guy called David Humphrey has blogged about making different music visualisers using WebGL and was able to create some really impressive ones. Here are some videos of the visualisations (click to watch):
I used SoundManager2 to pull the waveform data from the mp3 file. That feature requires Flash 9, so it might not be the best approach.
My waveform demo with HTML5 Canvas:
http://www.momentumracer.com/electriccanvas/
and WebGL:
http://www.momentumracer.com/electricwebgl/
Sources:
https://github.com/pepez/Electric-Canvas
Depending on complexity, you might be interested in trying out Processing (http://www.processing.org). It has really easy tools to make web-based apps, and tools to get the FFT and waveform of an audio file.