Unity - Google cloud speech-to-text voice recognition, Unity freezes after successful result - unity3d

A friend and I are working on a VR project in Unity (version 2018.3.3f1) and are trying to add voice recognition as a feature. The idea is that the user says a word and the voice recognition checks whether they pronounced it correctly. We chose the Google Cloud Speech-to-Text service because it supports our target language (Norwegian). The application is also multiplayer, so we are using the streaming version of Google Cloud Speech. Here is a link to the documentation: https://cloud.google.com/speech-to-text/docs/streaming-recognize
We wrote a plugin that runs the speech recognition for us; it is a modification of the example code from the link above:
public Task<bool> StartSpeechRecognition()
{
    return StreamingMicRecognizeAsync(20, "fantastisk");
}

static async Task<bool> StreamingMicRecognizeAsync(int inputTime, string inputWord)
{
    bool speechSuccess = false;
    Stopwatch timer = new Stopwatch();
    Task delay = Task.Delay(TimeSpan.FromSeconds(1));

    if (NAudio.Wave.WaveIn.DeviceCount < 1)
    {
        //Console.WriteLine("No microphone!");
        return false;
    }

    var speech = SpeechClient.Create();
    var streamingCall = speech.StreamingRecognize();

    // Write the initial request with the config.
    await streamingCall.WriteAsync(
        new StreamingRecognizeRequest()
        {
            StreamingConfig = new StreamingRecognitionConfig()
            {
                Config = new RecognitionConfig()
                {
                    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
                    SampleRateHertz = 16000,
                    LanguageCode = "nb",
                },
                InterimResults = true,
            }
        });

    // Compare the transcripts with the input word; finish and set speechSuccess when they match.
    Task compareSpeech = Task.Run(async () =>
    {
        while (await streamingCall.ResponseStream.MoveNext(default(CancellationToken)))
        {
            foreach (var result in streamingCall.ResponseStream.Current.Results)
            {
                foreach (var alternative in result.Alternatives)
                {
                    if (alternative.Transcript.Replace(" ", String.Empty)
                        .Equals(inputWord, StringComparison.InvariantCultureIgnoreCase))
                    {
                        speechSuccess = true;
                        return;
                    }
                }
            }
        }
    });

    // Read from the microphone and stream to the API.
    object writeLock = new object();
    bool writeMore = true;
    var waveIn = new NAudio.Wave.WaveInEvent();
    waveIn.DeviceNumber = 0;
    waveIn.WaveFormat = new NAudio.Wave.WaveFormat(16000, 1);
    waveIn.DataAvailable +=
        (object sender, NAudio.Wave.WaveInEventArgs args) =>
        {
            lock (writeLock)
            {
                if (!writeMore) return;
                streamingCall.WriteAsync(
                    new StreamingRecognizeRequest()
                    {
                        AudioContent = Google.Protobuf.ByteString
                            .CopyFrom(args.Buffer, 0, args.BytesRecorded)
                    }).Wait();
            }
        };

    waveIn.StartRecording();
    timer.Start();
    //Console.WriteLine("Speak now.");

    // Keep waiting as long as no match has been found and less than inputTime seconds have passed since recording started.
    while (!speechSuccess && timer.Elapsed.TotalSeconds <= inputTime)
    {
        await delay;
    }

    // Stop recording and shut down.
    waveIn.StopRecording();
    timer.Stop();
    lock (writeLock) writeMore = false;
    await streamingCall.WriteCompleteAsync();
    await compareSpeech;
    //Console.WriteLine("Finished.");
    return speechSuccess;
}
We made a small Unity project to test whether this worked, with a cube GameObject that had this script:
private CancellationTokenSource tokenSource;
VR_VoiceRecognition.VoiceRecognition voice = new VR_VoiceRecognition.VoiceRecognition();
IDisposable speech;

// Use this for initialization
void Start()
{
    speech = Observable.FromCoroutine(WaitForSpeech).Subscribe();
}

// Update is called once per frame
void Update()
{
}

IEnumerator WaitForSpeech()
{
    tokenSource = new CancellationTokenSource();
    CancellationToken token = tokenSource.Token;
    Debug.Log("Starting up");
    Task<bool> t = Task.Run(() => voice.StartSpeechRecognition());

    while (!(t.IsCompleted || t.IsCanceled))
    {
        yield return null;
    }

    if (t.Status != TaskStatus.RanToCompletion)
    {
        yield break;
    }
    else
    {
        bool result = t.Result;
        UnityEngine.Debug.Log(t.Result);
        yield return result;
    }
}

void OnApplicationQuit()
{
    print("Closing application.");
    speech.Dispose();
}
We are also using UniRx (https://assetstore.unity.com/packages/tools/integration/unirx-reactive-extensions-for-unity-17276), a plugin that Unity support recommended because they thought it might offer a workaround.
At the moment it works fine the first time you enter play mode in the editor. When the voice recognition returns false, everything is fine (two cases where this happens are when it cannot find a microphone or when the user does not say the specific word). However, when it succeeds it still returns true, but if you then exit play mode and enter it again, Unity freezes. Unity support suspects that it might have something to do with the Google .dll files or the Google API. We are not quite sure where to go from here and hope that someone can point us in the right direction.
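For completeness, one cleanup variant we could try is sketched below. It is only an assumption on our part, not something we have verified: the idea is to shut down the Google client's gRPC channels explicitly on quit, in case a background thread keeps the old channel alive across editor play sessions. SpeechClient.ShutdownDefaultChannelsAsync() is the channel-shutdown helper that GAX-based Google Cloud clients generally expose, but whether it is available depends on the Google.Cloud.Speech.V1 version in use.

// Hypothetical cleanup sketch (not verified): assumes the installed
// Google.Cloud.Speech.V1 / GAX version exposes SpeechClient.ShutdownDefaultChannelsAsync().
// Requires using System; in addition to the usings the script already has.
async void OnApplicationQuit()
{
    print("Closing application.");
    speech.Dispose();
    try
    {
        // Ask the client library to close any pooled gRPC channels so no
        // background threads survive into the next editor play session.
        await Google.Cloud.Speech.V1.SpeechClient.ShutdownDefaultChannelsAsync();
    }
    catch (Exception e)
    {
        Debug.LogWarning("Channel shutdown failed: " + e);
    }
}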

Related

Spatial Alignment between Two Hololens 2 Headsets using ARFoundation/Azure Spatial Anchors

I'm working through this tutorial: https://mtaulty.com/2019/07/18/simple-shared-holograms-with-photon-networking-part-1/ with the hope of reproducing the shared coordinate system between two Hololens 2 headsets. I'm using Unity 2020, PUN2, ARFoundation and MRTK.
Because the tutorial uses WorldAnchors (the WSA platform), which is a bit old, I'm trying to modify it to use ARFoundation. The code I have so far seems to have the two headsets communicating properly via PUN2, but the blue cube shown in the tutorial does not align between the headsets; the cube simply seems to be referenced to each headset's initial startup frame of reference.
The code is below. I've kept everything as close to one-to-one with the tutorial as possible, except where I felt I needed to swap WorldAnchors for ARAnchors, and where I swapped in a SpatialAnchorManager class to handle the Azure Spatial Anchors session because the tutorial's StartSession function didn't seem to work properly. Both AzureSpatialAnchorService.cs and PhotonScript.cs are attached to a root game object in the scene (a picture of the scene was attached). Based on the debug logs, the first headset is creating and saving an anchor to Azure and the second headset is finding that same anchor. But am I missing a necessary transformation between headsets?
Can anyone suggest what I'm doing wrong and/or what specific edits need to be made to get spatial alignment between the headsets?
Thanks!
AzureSpatialAnchorService.cs:
using Microsoft.Azure.SpatialAnchors.Unity;
using Microsoft.MixedReality.Toolkit.Utilities;
using System;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.WSA;
namespace AzureSpatialAnchors
{
[RequireComponent(typeof(SpatialAnchorManager))]
public class AzureSpatialAnchorService : MonoBehaviour
{
[Serializable]
public class AzureSpatialAnchorServiceProfile
{
[SerializeField]
[Tooltip("The account id from the Azure portal for the Azure Spatial Anchors service")]
string azureAccountId;
public string AzureAccountId => this.azureAccountId;
[SerializeField]
[Tooltip("The access key from the Azure portal for the Azure Spatial Anchors service (for Key authentication)")]
string azureServiceKey;
public string AzureServiceKey => this.azureServiceKey;
}
[SerializeField]
[Tooltip("The configuration for the Azure Spatial Anchors Service")]
AzureSpatialAnchorServiceProfile profile = new AzureSpatialAnchorServiceProfile();
public AzureSpatialAnchorServiceProfile Profile => this.profile;
TaskCompletionSource<CloudSpatialAnchor> taskWaitForAnchorLocation;
//CloudSpatialAnchorSession cloudSpatialAnchorSession;
private SpatialAnchorManager _spatialAnchorManager = null;
public AzureSpatialAnchorService()
{
}
public async Task<string> CreateAnchorOnObjectAsync(GameObject gameObjectForAnchor)
{
string anchorId = string.Empty;
try
{
await this.StartSession();
Debug.Log("Started Session");
//Add and configure ASA components
CloudNativeAnchor cloudNativeAnchor = gameObjectForAnchor.AddComponent<CloudNativeAnchor>();
await cloudNativeAnchor.NativeToCloud();
Debug.Log("After NativeToCloud");
CloudSpatialAnchor cloudSpatialAnchor = cloudNativeAnchor.CloudAnchor;
cloudSpatialAnchor.Expiration = DateTimeOffset.Now.AddDays(3);
// As per previous comment.
//Collect Environment Data
while (!_spatialAnchorManager.IsReadyForCreate)
{
float createProgress = _spatialAnchorManager.SessionStatus.RecommendedForCreateProgress;
Debug.Log($"ASA - Move your device to capture more environment data: {createProgress:0%}");
}
Debug.Log($"ASA - Saving room cloud anchor... ");
await _spatialAnchorManager.CreateAnchorAsync(cloudSpatialAnchor);
anchorId = cloudSpatialAnchor?.Identifier;
bool saveSucceeded = cloudSpatialAnchor != null;
if (!saveSucceeded)
{
Debug.LogError("ASA - Failed to save, but no exception was thrown.");
return anchorId;
}
anchorId = cloudSpatialAnchor.Identifier;
Debug.Log($"ASA - Saved room cloud anchor with ID: {anchorId}");
}
catch (Exception exception) // TODO: reasonable exceptions here.
{
Debug.Log("ASA - Failed to save room anchor: " + exception.ToString());
Debug.LogException(exception);
}
return (anchorId);
}
public async Task<bool> PopulateAnchorOnObjectAsync(string anchorId, GameObject gameObjectForAnchor)
{
bool anchorLocated = false;
try
{
await this.StartSession();
this.taskWaitForAnchorLocation = new TaskCompletionSource<CloudSpatialAnchor>();
var watcher = _spatialAnchorManager.Session.CreateWatcher(
new AnchorLocateCriteria()
{
Identifiers = new string[] { anchorId },
BypassCache = true,
Strategy = LocateStrategy.AnyStrategy,
RequestedCategories = AnchorDataCategory.Spatial
}
);
var cloudAnchor = await this.taskWaitForAnchorLocation.Task;
anchorLocated = cloudAnchor != null;
if (anchorLocated)
{
Debug.Log("Anchor located");
gameObjectForAnchor.AddComponent<CloudNativeAnchor>().CloudToNative(cloudAnchor);
Debug.Log("Attached Local Anchor");
}
watcher.Stop();
}
catch (Exception ex) // TODO: reasonable exceptions here.
{
Debug.Log($"Caught {ex.Message}");
}
return (anchorLocated);
}
/// <summary>
/// Start the Azure Spatial Anchor Service session
/// This must be called before calling create, populate or delete methods.
/// </summary>
public async Task<bool> StartSession()
{
//if (this.cloudSpatialAnchorSession == null)
//{
// Debug.Assert(this.cloudSpatialAnchorSession == null);
// this.ThrowOnBadAuthConfiguration();
// // setup the session
// this.cloudSpatialAnchorSession = new CloudSpatialAnchorSession();
// // set the Azure configuration parameters
// this.cloudSpatialAnchorSession.Configuration.AccountId = this.Profile.AzureAccountId;
// this.cloudSpatialAnchorSession.Configuration.AccountKey = this.Profile.AzureServiceKey;
// // register event handlers
// this.cloudSpatialAnchorSession.Error += this.OnCloudSessionError;
// this.cloudSpatialAnchorSession.AnchorLocated += OnAnchorLocated;
// this.cloudSpatialAnchorSession.LocateAnchorsCompleted += OnLocateAnchorsCompleted;
// // start the session
// this.cloudSpatialAnchorSession.Start();
//}
_spatialAnchorManager = GetComponent<SpatialAnchorManager>();
_spatialAnchorManager.LogDebug += (sender, args) => Debug.Log($"ASA - Debug: {args.Message}");
_spatialAnchorManager.Error += (sender, args) => Debug.LogError($"ASA - Error: {args.ErrorMessage}");
_spatialAnchorManager.AnchorLocated += OnAnchorLocated;
//_spatialAnchorManager.LocateAnchorsCompleted += OnLocateAnchorsCompleted;
await _spatialAnchorManager.StartSessionAsync();
return true;
}
/// <summary>
/// Stop the Azure Spatial Anchor Service session
/// </summary>
//public void StopSession()
//{
// if (this.cloudSpatialAnchorSession != null)
// {
// // stop session
// this.cloudSpatialAnchorSession.Stop();
// // clear event handlers
// this.cloudSpatialAnchorSession.Error -= this.OnCloudSessionError;
// this.cloudSpatialAnchorSession.AnchorLocated -= OnAnchorLocated;
// this.cloudSpatialAnchorSession.LocateAnchorsCompleted -= OnLocateAnchorsCompleted;
// // cleanup
// this.cloudSpatialAnchorSession.Dispose();
// this.cloudSpatialAnchorSession = null;
// }
//}
void OnLocateAnchorsCompleted(object sender, LocateAnchorsCompletedEventArgs args)
{
Debug.Log("On Locate Anchors Completed");
Debug.Assert(this.taskWaitForAnchorLocation != null);
if (!this.taskWaitForAnchorLocation.Task.IsCompleted)
{
this.taskWaitForAnchorLocation.TrySetResult(null);
}
}
void OnAnchorLocated(object sender, AnchorLocatedEventArgs args)
{
Debug.Log($"On Anchor Located, status is {args.Status} anchor is {args.Anchor?.Identifier}, pointer is {args.Anchor?.LocalAnchor}");
Debug.Assert(this.taskWaitForAnchorLocation != null);
this.taskWaitForAnchorLocation.SetResult(args.Anchor);
}
void OnCloudSessionError(object sender, SessionErrorEventArgs args)
{
Debug.Log($"On Cloud Session Error: {args.ErrorMessage}");
}
void ThrowOnBadAuthConfiguration()
{
if (string.IsNullOrEmpty(this.Profile.AzureAccountId) ||
string.IsNullOrEmpty(this.Profile.AzureServiceKey))
{
throw new ArgumentNullException("Missing required configuration to connect to service");
}
}
}
}
PhotonScript.cs:
using System;
using System.Threading.Tasks;
using AzureSpatialAnchors;
using ExitGames.Client.Photon;
using Photon.Pun;
using Photon.Realtime;
public class PhotonScript : MonoBehaviourPunCallbacks
{
enum RoomStatus
{
None,
CreatedRoom,
JoinedRoom,
JoinedRoomDownloadedAnchor
}
public int emptyRoomTimeToLiveSeconds = 120;
RoomStatus roomStatus = RoomStatus.None;
void Start()
{
PhotonNetwork.ConnectUsingSettings();
}
public override void OnConnectedToMaster()
{
base.OnConnectedToMaster();
var roomOptions = new RoomOptions();
roomOptions.EmptyRoomTtl = this.emptyRoomTimeToLiveSeconds * 1000;
PhotonNetwork.JoinOrCreateRoom(ROOM_NAME, roomOptions, null);
}
public async override void OnJoinedRoom()
{
base.OnJoinedRoom();
// Note that the creator of the room also joins the room...
if (this.roomStatus == RoomStatus.None)
{
this.roomStatus = RoomStatus.JoinedRoom;
}
await this.PopulateAnchorAsync();
}
public async override void OnCreatedRoom()
{
base.OnCreatedRoom();
this.roomStatus = RoomStatus.CreatedRoom;
await this.CreateAnchorAsync();
}
async Task CreateAnchorAsync()
{
// If we created the room then we will attempt to create an anchor for the parent
// of the cubes that we are creating.
var anchorService = this.GetComponent<AzureSpatialAnchorService>();
var anchorId = await anchorService.CreateAnchorOnObjectAsync(this.gameObject);
// Put this ID into a custom property so that other devices joining the
// room can get hold of it.
#if UNITY_2020
PhotonNetwork.CurrentRoom.SetCustomProperties(
new Hashtable()
{
{ ANCHOR_ID_CUSTOM_PROPERTY, anchorId }
}
);
#endif
}
async Task PopulateAnchorAsync()
{
if (this.roomStatus == RoomStatus.JoinedRoom)
{
object keyValue = null;
#if UNITY_2020
// First time around, this property may not be here so we see if is there.
if (PhotonNetwork.CurrentRoom.CustomProperties.TryGetValue(
ANCHOR_ID_CUSTOM_PROPERTY, out keyValue))
{
// If the anchorId property is present then we will try and get the
// anchor but only once so change the status.
this.roomStatus = RoomStatus.JoinedRoomDownloadedAnchor;
// If we didn't create the room then we want to try and get the anchor
// from the cloud and apply it.
var anchorService = this.GetComponent<AzureSpatialAnchorService>();
await anchorService.PopulateAnchorOnObjectAsync(
(string)keyValue, this.gameObject);
}
#endif
}
}
public async override void OnRoomPropertiesUpdate(Hashtable propertiesThatChanged)
{
base.OnRoomPropertiesUpdate(propertiesThatChanged);
await this.PopulateAnchorAsync();
}
static readonly string ANCHOR_ID_CUSTOM_PROPERTY = "anchorId";
static readonly string ROOM_NAME = "HardCodedRoomName";
}
From reviewing the code and scenario, I read this as both headsets being in the same local area. In that case, the ASA service provides the shared anchor, as detailed in the docs:
https://learn.microsoft.com/en-us/windows/mixed-reality/design/shared-experiences-in-mixed-reality
"Shared static holograms (no interactions)
Leverage Azure Spatial Anchors in your app. Enabling and sharing spatial anchors across devices allows you to create an application where users see holograms in the same place at the same time. Additional syncing across devices is needed to enable users to interact with holograms and see movements or state updates of holograms."
The tutorial that the Microsoft docs point to is here, in case it helps to compare against the other sample:
https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/mr-learning-sharing-01
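As a minimal sketch of the idea (my assumption based on the tutorial's description, not code taken from it): the shared content should live under the GameObject that the cloud anchor repositions, so that once CloudToNative applies the located anchor's pose on the second headset, everything parented beneath it lines up. The SharedContentRoot and cubePrefab names here are hypothetical.

// Hypothetical sketch: keep shared holograms parented under the anchored root
// so that locating the cloud anchor realigns them on every device.
using UnityEngine;

public class SharedContentRoot : MonoBehaviour
{
    // Spawn the shared cube as a child of this (anchored) GameObject.
    public GameObject cubePrefab;

    public void CreateSharedCube()
    {
        // Local position is relative to the anchored root, so it resolves to the
        // same physical spot on each headset once the anchor has been located.
        var cube = Instantiate(cubePrefab, transform);
        cube.transform.localPosition = new Vector3(0f, 0f, 1.5f);
        cube.transform.localScale = Vector3.one * 0.2f;
    }
}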

How do I properly use DI with IHttpClientFactory in .NET MAUI

I haven't found anything about HttpClient in .NET MAUI.
Does anyone know if the service:
builder.Services.AddHttpClient<IMyService, MyService>();
is possible in MAUI's startup MauiProgram.cs, and then inject the HttpClient where it is going to be used? I have tried everything and it does not seem to work. Only AddSingleton of HttpClient works for me, but that doesn't seem optimal.
PS: I had to install the NuGet package Microsoft.Extensions.Http in order to use AddHttpClient.
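For context, this is the typed-client pattern I am trying to get working, as a minimal sketch with a hypothetical IMyService/MyService pair and a placeholder base address:

// MauiProgram.cs -- typed-client registration (requires Microsoft.Extensions.Http)
builder.Services.AddHttpClient<IMyService, MyService>(client =>
{
    client.BaseAddress = new Uri("https://api.myapi.com");
});

// MyService.cs -- the factory-created HttpClient is injected by the container
public class MyService : IMyService
{
    private readonly HttpClient _http;

    public MyService(HttpClient http) => _http = http;

    public Task<string> GetAsync(string path) => _http.GetStringAsync(path);
}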
UPDATES:
WORKING CODE:
MauiProgram.cs
builder.Services.AddTransient<Service<Display>, DisplayService>();
builder.Services.AddTransient<Service<Video>, VideoService>();
builder.Services.AddTransient<Service<Image>, ImageService>();
builder.Services.AddTransient<Service<Log>, LogService>();
builder.Services.AddSingleton(sp => new HttpClient() { BaseAddress = new Uri("https://api.myapi.com") });
Example of VideosViewModel.cs using a service
[INotifyPropertyChanged]
public partial class VideosViewModel
{
readonly Service<Video> videoService;
[ObservableProperty]
ObservableCollection<Video> videos;
[ObservableProperty]
bool isEmpty;
[ObservableProperty]
bool isRefreshing;
public VideosViewModel(Service<Video> videoService)
{
this.videoService = videoService;
}
[ICommand]
internal async Task LoadVideosAsync()
{
#if ANDROID || IOS || tvOS || Tizen
UserDialogs.Instance.ShowLoading("Henter videoer fra databasen...");
#endif
await Task.Delay(2000);
Videos = new();
try
{
await foreach (Video video in videoService.GetAllAsync().OrderBy(x => x.Id))
{
Videos.Add(video);
}
}
catch (Exception ex)
{
throw new Exception(ex.Message);
}
finally
{
IsRefreshing = false;
#if ANDROID || IOS || tvOS
UserDialogs.Instance.HideLoading();
#endif
if (Videos.Count is 0)
{
IsEmpty = true;
}
else
{
IsEmpty = false;
}
}
}
[ICommand]
async Task UploadVideoAsync()
{
await Shell.Current.DisplayAlert("Upload en video", "Under opbygning - kommer senere!", "OK");
}
}
NOT WORKING CODE:
MauiProgram.cs
builder.Services.AddHttpClient<Service<Display>, DisplayService>(sp => sp.BaseAddress = new Uri("https://api.myapi.com"));
builder.Services.AddHttpClient<Service<Video>, VideoService>(sp => sp.BaseAddress = new Uri("https://api.myapi.com"));
builder.Services.AddHttpClient<Service<Image>, ImageService>(sp => sp.BaseAddress = new Uri("https://api.myapi.com"));
builder.Services.AddHttpClient<Service<Log>, LogService>(sp => sp.BaseAddress = new Uri("https://api.myapi.com"));
VideosViewModel.cs
Same as above working code.
What specifically doesn't work: I get a null reference exception on OrderBy(x => x.Id), with x.Id highlighted, in the view model. Removing the OrderBy call gets rid of the exception, but the view then shows no data except one random empty Frame.
Do not use builder.Services.AddHttpClient in MAUI; use a single HttpClient instance.
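A minimal sketch of that single-instance approach, mirroring the working registration shown above (the VideoService shape is just illustrative):

// MauiProgram.cs -- register one shared HttpClient for the whole app
builder.Services.AddSingleton(sp => new HttpClient
{
    BaseAddress = new Uri("https://api.myapi.com")
});

// Any service then receives the shared client via constructor injection
public class VideoService
{
    private readonly HttpClient _http;

    public VideoService(HttpClient http) => _http = http;
}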

How do I integrate Watson Text to Speech with Speech to Text in Unity

I'm building an AR CV app in Unity using the Watson SDK. I'm a complete noob, but I've managed to follow the videos and create something kinda cool.
The idea is that it will give the candidate a more interesting way to describe themselves than a sheet of paper. My problem is that while I've managed to get speech-to-text streaming working, I don't know what my next steps are. It's for a university project, but my tutor doesn't know either. Also, if TAJ reads this, thank you so much for those YouTube videos!
My question is: how do I add text to speech and Assistant?
The basic idea is that you use the Watson Unity SDK services to capture speech from the microphone and convert it to text. You shouldn't send that text straight back to Text to Speech, since it's just what you spoke (unless that's what you want). Instead, the text can be used in many ways. One way is to use the Watson Assistant service to build a kind of script you can drive in natural language. The output of Assistant's Message method is text that you can then feed into Watson Text to Speech, producing audio that can be played back. Essentially, building on the StreamingExample:
private void OnRecognize(SpeechRecognitionEvent result, Dictionary<string, object> customData)
{
    if (result != null && result.results.Length > 0)
    {
        foreach (var res in result.results)
        {
            foreach (var alt in res.alternatives)
            {
                // Is this the final result for the utterance?
                if (res.final)
                {
                    MessageRequest messageRequest = new MessageRequest()
                    {
                        Input = new MessageInput()
                        {
                            Text = alt.transcript
                        }
                    };
                    // Send the text to Assistant
                    assistant.Message(OnMessage, OnFail, assistantId, sessionId, messageRequest);
                }
            }
        }
    }
}

private void OnMessage(MessageResponse response, Dictionary<string, object> customData)
{
    // Send Assistant output to TextToSpeech
    textToSpeech.ToSpeech(OnSynthesize, OnFail, response.output.generic[0].text, true);
}

private void OnSynthesize(AudioClip clip, Dictionary<string, object> customData)
{
    // Play the clip from TextToSpeech
    PlayClip(clip);
}

private void PlayClip(AudioClip clip)
{
    if (Application.isPlaying && clip != null)
    {
        GameObject audioObject = new GameObject("AudioObject");
        AudioSource source = audioObject.AddComponent<AudioSource>();
        source.spatialBlend = 0.0f;
        source.loop = false;
        source.clip = clip;
        source.Play();
        Destroy(audioObject, clip.length);
    }
}
You will need to properly instantiate and authenticate the services.
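A minimal sketch of that setup, assuming the same credential pattern used in the keyword-spotting example further down; exact class names, constructor arguments, and the version date vary between Watson Unity SDK releases, so treat the identifiers here as placeholders:

// Hypothetical setup sketch -- adjust to the SDK version you have installed.
Credentials assistantCredentials = new Credentials(_assistantUsername, _assistantPassword, _assistantUrl);
Assistant assistant = new Assistant(assistantCredentials);
assistant.VersionDate = "2018-09-20";   // placeholder version date; Assistant requires one

Credentials ttsCredentials = new Credentials(_ttsUsername, _ttsPassword, _ttsUrl);
TextToSpeech textToSpeech = new TextToSpeech(ttsCredentials);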

Watson keyword spotting in Unity

I have downloaded the Watson Unity SDK and set it up as shown in the picture, and it works.
My question is: how do I add keyword spotting?
I have read this question: "For Watson's Speech-To-Text Unity SDK, how can you specify keywords?"
But I can't, for example, locate the SendStart function.
The Speech to Text service does not find keywords. To find keywords you would need to take the final text output and send it to the Alchemy Language service. The Natural Language Understanding service is still being abstracted into the Watson Unity SDK, but it will eventually replace Alchemy Language.
private AlchemyAPI m_AlchemyAPI = new AlchemyAPI();

private void FindKeywords(string speechToTextFinalResponse)
{
    if (!m_AlchemyAPI.ExtractKeywords(OnExtractKeywords, speechToTextFinalResponse))
        Log.Debug("ExampleAlchemyLanguage", "Failed to get keywords.");
}

void OnExtractKeywords(KeywordData keywordData, string data)
{
    Log.Debug("ExampleAlchemyLanguage", "GetKeywordsResult: {0}", JsonUtility.ToJson(keywordData));
}
EDIT 1
Natural Language Understanding has now been abstracted into the Watson Unity SDK.
NaturalLanguageUnderstanding m_NaturalLanguageUnderstanding = new NaturalLanguageUnderstanding();
private static fsSerializer sm_Serializer = new fsSerializer();

private void FindKeywords(string speechToTextFinalResponse)
{
    Parameters parameters = new Parameters()
    {
        text = speechToTextFinalResponse,
        return_analyzed_text = true,
        language = "en",
        features = new Features()
        {
            entities = new EntitiesOptions()
            {
                limit = 50,
                sentiment = true,
                emotion = true,
            },
            keywords = new KeywordsOptions()
            {
                limit = 50,
                sentiment = true,
                emotion = true
            }
        }
    };

    if (!m_NaturalLanguageUnderstanding.Analyze(OnAnalyze, parameters))
        Log.Debug("ExampleNaturalLanguageUnderstanding", "Failed to analyze.");
}

private void OnAnalyze(AnalysisResults resp, string customData)
{
    fsData data = null;
    sm_Serializer.TrySerialize(resp, out data).AssertSuccess();
    Log.Debug("ExampleNaturalLanguageUnderstanding", "AnalysisResults: {0}", data.ToString());
}
EDIT 2
Sorry, I didn't realize Speech to Text had the ability to do keyword spotting. Thanks to Nathan for pointing that out to me! I have added this functionality to an upcoming release of Speech to Text in the Unity SDK. It will look like this in Watson Unity SDK 1.0.0:
void Start()
{
// Create credential and instantiate service
Credentials credentials = new Credentials(_username, _password, _url);
_speechToText = new SpeechToText(credentials);
// Add keywords
List<string> keywords = new List<string>();
keywords.Add("speech");
_speechToText.KeywordsThreshold = 0.5f;
_speechToText.Keywords = keywords.ToArray();
_speechToText.Recognize(_audioClip, HandleOnRecognize);
}
private void HandleOnRecognize(SpeechRecognitionEvent result)
{
if (result != null && result.results.Length > 0)
{
foreach (var res in result.results)
{
foreach (var alt in res.alternatives)
{
string text = alt.transcript;
Log.Debug("ExampleSpeechToText", string.Format("{0} ({1}, {2:0.00})\n", text, res.final ? "Final" : "Interim", alt.confidence));
if (res.final)
_recognizeTested = true;
}
if (res.keywords_result != null && res.keywords_result.keyword != null)
{
foreach (var keyword in res.keywords_result.keyword)
{
Log.Debug("ExampleSpeechToText", "keyword: {0}, confidence: {1}, start time: {2}, end time: {3}", keyword.normalized_text, keyword.confidence, keyword.start_time, keyword.end_time);
}
}
}
}
}
Currently you can find the refactor branch here. This release is a breaking change and has all of the higher level (widgets, config, etc) functionality removed.

UWP trying to run background service throwing exception

I am trying to run a background service in a UWP application. First I check whether the application has background permission; if it does, I register the service to run.
This code was working fine until I updated Visual Studio along with the Windows 10 SDK to the Creators Update version. Now I can't figure out whether this update changed how background services are registered.
using System;
using Windows.ApplicationModel.Background;
using BackgroundService;
using SampleApp.Config;
namespace SampleApp.Background
{
class BackgroundClass
{
LocalConfig LC = new LocalConfig();
public async void RequestBackgroundAccess()
{
var result = await BackgroundExecutionManager.RequestAccessAsync();
switch (result)
{
case BackgroundAccessStatus.AllowedMayUseActiveRealTimeConnectivity:
break;
case BackgroundAccessStatus.AllowedWithAlwaysOnRealTimeConnectivity:
break;
case BackgroundAccessStatus.Denied:
break;
case BackgroundAccessStatus.Unspecified:
break;
}
}
public async void RegisterBackgroundSync()
{
var trigger = new ApplicationTrigger();
var condition = new SystemCondition(SystemConditionType.InternetAvailable);
if (!LC.BackgroundSyncStatusGET())
{
var task = new BackgroundTaskBuilder
{
Name = nameof(BackgroundSync),
CancelOnConditionLoss = true,
TaskEntryPoint = typeof(BackgroundSync).ToString(),
};
task.SetTrigger(trigger);
task.AddCondition(condition);
task.Register();
LC.BackgroundSyncStatusSET(true);
}
await trigger.RequestAsync(); //EXCEPTION HAPPENS AT THIS LINE
}
public void RegisterBackgroundService(uint time)
{
var taskName = "BackgroundService";
foreach (var unregisterTask in BackgroundTaskRegistration.AllTasks)
{
if (unregisterTask.Value.Name == taskName)
{
unregisterTask.Value.Unregister(true);
}
}
if(time != 0)
{
var trigger = new TimeTrigger(time, false);
var condition = new SystemCondition(SystemConditionType.InternetAvailable);
var task = new BackgroundTaskBuilder
{
Name = nameof(BackgroundService),
CancelOnConditionLoss = true,
TaskEntryPoint = typeof(BackgroundService).ToString(),
};
task.SetTrigger(trigger);
task.AddCondition(condition);
task.Register();
}
}
}
}
Now, when requesting, I check whether the background service is already registered to avoid re-registration issues. I am getting the following exception:
System.Runtime.InteropServices.COMException occurred
HResult=0x80004005
Message=Error HRESULT E_FAIL has been returned from a call to a COM component.
Source=Windows
StackTrace:
   at Windows.ApplicationModel.Background.ApplicationTrigger.RequestAsync()
   at SampleApp.Background.BackgroundClass.<RegisterBackgroundSync>d__2.MoveNext()
Please Help
I had this same problem; it was in my Windows 10 privacy settings.
System Settings => Privacy Settings
In the left-hand menu choose Background apps.
Check to make sure your app hasn't been blocked from running background tasks.
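If you want to detect that case in code rather than asking users to check the setting, here is a minimal sketch (my own addition, not part of the original fix): inspect the result of RequestAccessAsync before firing the trigger, since a blocked app appears to be what produced the E_FAIL above.

// Hypothetical guard: only fire the trigger when background execution is allowed.
// Requires using System.Threading.Tasks; and Windows.ApplicationModel.Background;
public async Task<bool> TryRequestBackgroundSyncAsync(ApplicationTrigger trigger)
{
    var access = await BackgroundExecutionManager.RequestAccessAsync();

    // On Creators Update, denial can show up as DeniedByUser (the Privacy
    // Settings case) or DeniedBySystemPolicy.
    if (access == BackgroundAccessStatus.DeniedByUser ||
        access == BackgroundAccessStatus.DeniedBySystemPolicy ||
        access == BackgroundAccessStatus.Unspecified)
    {
        // Point the user at Settings => Privacy => Background apps instead of failing.
        return false;
    }

    var result = await trigger.RequestAsync();
    return result == ApplicationTriggerResult.Allowed;
}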