SpeechSynthesizer: how to make the voice sound more human (Windows Forms)

I have a Windows Forms application where I use the system voice to read words. However, it sounds robotic. How can I make it more human-like?
I am using this:
using namespace System::Speech::Synthesis;
and this:
SpeechSynthesizer^ speaker = gcnew SpeechSynthesizer();
speaker->SpeakAsync(textBox1->Text);
The program works, but I want it to sound like a human.

System.Speech.Synthesis is very VERY old (15+ years old). Thus, it sounds robotic.
You could try using the latest speech platform from Microsoft. NOTE: I worked on System.Speech and on this latest speech platform (Cognitive Services Speech). You can find samples and instructions on how to use this platform here: https://aka.ms/speech/sdk.
Additionally, if you'd like to hear what these voices sound like, you can listen to samples of them here: https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/
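As a rough sketch, synthesis with the Speech SDK's C++ API looks something like this (the key, region, and voice name below are placeholders; see the samples linked above for authoritative usage):
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

int main()
{
    // Placeholders: substitute your own Azure subscription key and region.
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // Pick one of the neural voices for a more natural sound.
    config->SetSpeechSynthesisVoiceName("en-US-AriaNeural");

    // With no audio config given, output goes to the default speaker.
    auto synthesizer = SpeechSynthesizer::FromConfig(config);
    auto result = synthesizer->SpeakTextAsync("Hello, this should sound more natural.").get();

    return result->Reason == ResultReason::SynthesizingAudioCompleted ? 0 : 1;
}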
--robch

Related

Is there a way to generate a tone from Flutter?

I have been searching for ways to do this, but it seems there is not much support for generating tones right now, other than going through the process of generating a tone file, which is then played right after.
Is there a way to generate a sound and play it immediately, without the step of generating a file and then playing that file?
I don't think Flutter even provides an out-of-the-box way to play sounds. You need to rely on packages:
If you're looking to trigger native "beep" sounds from the OS: flutter_beep
If you want to generate a soundwave: sound_generator (the underlying idea is sketched below)
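For context, generating a tone in memory (conceptually what sound_generator does under the hood) amounts to filling a PCM buffer with a sine wave; no intermediate file is needed. A language-agnostic sketch, shown here in C++:
#include <cmath>
#include <cstdint>
#include <vector>

// Fill a buffer with 16-bit PCM samples of a sine tone.
std::vector<int16_t> makeTone(double freqHz, double seconds, int sampleRate = 44100)
{
    std::vector<int16_t> samples(static_cast<size_t>(seconds * sampleRate));
    for (size_t i = 0; i < samples.size(); ++i)
    {
        double t = static_cast<double>(i) / sampleRate;
        samples[i] = static_cast<int16_t>(32767.0 * std::sin(2.0 * 3.14159265358979 * freqHz * t));
    }
    return samples;  // hand this buffer straight to the audio output
}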

Flutter/Dart code loading over the network

Recently, I watched the first introduction of Flutter (originally named Sky) on YouTube: https://www.youtube.com/watch?time_continue=10&v=PnIWl33YMwA
At 1:54 Eric Seidel says something like this: "This all is loaded over the network. Dart code over the network." What happened to this in Flutter?
Is it possible to load Dart code like new versions directly over the network without using the AppStore?
I'm not sure whether Eric was talking about the data or the actual code; it does sound like he meant both.
It may have been possible to load code over the network in those early days because releases shipped the Dart VM and code was JIT-compiled. Since late 2015, Flutter has used Dart's AOT compilation (see this video).
So no, it's not possible to update your Flutter apps over the network.
This all is loaded over the network. Dart code over the network
After watching the video, I got the context of the line. It means the data is fetched over the network, and the code is written in Dart rather than Java.

How does Marmalade support text-to-speech?

Is there any API/SDK to support TTS (text-to-speech) in Marmalade?
I've had some success porting Flite (http://www.speech.cs.cmu.edu/flite/) to the Marmalade environment. It produces wave files and raw in-memory buffers (which can then be played directly using s3eSound) just fine.
The s3eSound adapter (which plays the text directly from within Flite) is a work in progress, so while it does produce something close to recognisable speech, it is also obviously bugged. For my purposes the raw buffers are more useful anyway, but feel free to try to fix it up.
You can see what I've done here: https://github.com/madmaw/marmalade-flite
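For reference, the Flite side of this is short. A minimal sketch of synthesizing into an in-memory wave with stock Flite's C API (the voice registration call depends on which voices you compile in; the s3eSound playback is omitted here):
extern "C" {
#include "flite.h"
cst_voice *register_cmu_us_kal(const char *voxdir);  // from the compiled-in kal voice
}

cst_wave *synthesize(const char *text)
{
    flite_init();
    cst_voice *voice = register_cmu_us_kal(NULL);

    // Returns raw 16-bit samples in memory: wave->samples, with
    // wave->num_samples and wave->sample_rate filled in, ready to
    // hand to a sound API such as s3eSound.
    return flite_text_to_wave(text, voice);
}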
There is no specific API provided by Marmalade; however, you may be able to use the EDK if the native APIs provide this functionality on iOS or Android.
https://www.madewithmarmalade.com/devnet/docs#/main/extensions.html

SoundTouch BPM detection on iPhone

I'm trying to integrate a mechanism to calculate the BPM of songs in the iPod library (on the iPhone).
Searching the web, I found that the most used and reliable library for this kind of thing is SoundTouch. Does anyone have experience with this library? Is it computationally feasible to run it on iPhone hardware?
I have recently been using the code from the BPMDetect class of the SoundTouch library successfully. I initially compiled it as C++, later translated the code to C#, and lately I've been using the C++ code in an Android app through JNI. I'm not really familiar with development on iOS, but I'm almost certain that what you're trying to do is possible.
The only files you need from the SoundTouch source code are the following:
C++ files
BPMDetect.cpp
FIFOSampleBuffer.cpp
PeakFinder.cpp
Header files
BPMDetect.h
FIFOSampleBuffer.h
FIFOSamplePipe.h
PeakFinder.h
soundtouch_config.h
STTypes.h
At least these are the only ones I had to use to make it work.
The BPMDetect class receives raw samples through its inputSamples() method and can calculate a BPM value even before the whole file has been loaded into its buffer. I have found that these intermediate values differ from the one obtained once the whole file is loaded, which in my experience is more accurate.
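As a rough sketch of that usage (readPcmChunk is a hypothetical helper standing in for whatever decoder you use, e.g. FMOD):
#include "BPMDetect.h"

using namespace soundtouch;

// Hypothetical decoder hook: fills 'buffer' with up to maxFrames sample
// frames of decoded PCM and returns how many frames were read (0 at EOF).
int readPcmChunk(SAMPLETYPE *buffer, int maxFrames);

float detectBpm(int numChannels, int sampleRate)
{
    BPMDetect bpm(numChannels, sampleRate);

    // SAMPLETYPE is float or short depending on the STTypes.h configuration.
    SAMPLETYPE buffer[4096];
    int frames;

    // Feed the whole file; intermediate getBpm() calls would also work,
    // but the final value is more accurate.
    while ((frames = readPcmChunk(buffer, 4096 / numChannels)) > 0)
    {
        bpm.inputSamples(buffer, frames);
    }
    return bpm.getBpm();
}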
Hope this helps.
EDIT:
It's a complex process to explain in a comment, so I'm editing the answer instead.
The gist of it is that you need your Android app to consume native code. In order to do that, you need to compile the files listed above from the SoundTouch library with the Android NDK toolset.
That will leave you with native code that can process raw sound data, but you still need to get the data out of the sound file, which you can do in several ways. The way I was doing it was with the FMOD library for Android; here's a nice example of that: FMOD for Android.
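For instance, a minimal jni/Android.mk for ndk-build might look something like this (the module name and flags are illustrative, not taken from a real project):
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
# Only the SoundTouch sources listed above are needed for BPM detection.
LOCAL_MODULE    := soundtouch_bpm
LOCAL_SRC_FILES := BPMDetect.cpp FIFOSampleBuffer.cpp PeakFinder.cpp
LOCAL_CPPFLAGS  := -fexceptions
include $(BUILD_SHARED_LIBRARY)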
Supposing you declared a method like this in your C code:
void Java_your_package_YourClassName_cPlay(JNIEnv *env, jobject thiz)
{
    // 'sound' is an FMOD sound object initialised elsewhere in the native code
    sound->play();
}
On the Android app you use your native methods in the following way:
public class Sound {
    // Native method declaration
    private native void cPlay();

    public void play()
    {
        cPlay();
    }
}
In order to have a friendlier API to work with, you can create wrappers around these function calls.
I put the native C code I was using in a gist here.
Hope this helps.

iPhone, OpenCV and CvBlobDetector

I found Yoshimasa Niwa's article about blob detection here:
http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en
And something on realtime face detection here:
http://www.morethantechnical.com/2009/08/09/near-realtime-face-detection-on-the-iphone-w-opencv-port-wcodevideo/
But what I really want to do is realtime blob detection (like http://www.youtube.com/watch?v=LIgsVoCXTXM) using the iPhone 4 camera.
I can find the headers for CvBlobDetector in cvvidsurv.hpp. But trying to use that without modification is not the right thing to do.
How do I get CvBlobDetector to work? Or is there an alternate solution?
Make sure you've followed the instructions to use it properly:
http://opencv.willowgarage.com/wiki/cvBlobsLib
One of the alternative solutions I used, and it works well, is:
http://code.google.com/p/cvblob/
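For a sense of how cvblob is used, here is a rough per-frame sketch against its C++ API (it builds on the old C-style OpenCV types it was written for; the threshold and area limits below are placeholders):
#include <cstdio>
#include <cv.h>
#include <cvblob.h>

using namespace cvb;

void detectBlobs(IplImage *frame)
{
    // Binarise the camera frame first; a fixed threshold is the simplest option.
    IplImage *gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    cvCvtColor(frame, gray, CV_BGR2GRAY);
    cvThreshold(gray, gray, 100, 255, CV_THRESH_BINARY);

    // cvLabel writes connected-component labels into an IPL_DEPTH_LABEL image.
    IplImage *labelImg = cvCreateImage(cvGetSize(frame), IPL_DEPTH_LABEL, 1);
    CvBlobs blobs;
    cvLabel(gray, labelImg, blobs);
    cvFilterByArea(blobs, 500, 1000000);  // drop tiny noise blobs

    for (CvBlobs::const_iterator it = blobs.begin(); it != blobs.end(); ++it)
    {
        const CvBlob *blob = it->second;
        printf("blob %u: centroid (%.1f, %.1f)\n",
               it->first, blob->centroid.x, blob->centroid.y);
    }

    cvReleaseBlobs(blobs);
    cvReleaseImage(&labelImg);
    cvReleaseImage(&gray);
}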