I'm building a Flutter app that simply shows a notification when the user opens Instagram.
Is there any way to check if Instagram is opened or not in Flutter?
Any answers will be appreciated. Thank you!
You can use the usage_stats plugin:
List<UsageInfo> list = await UsageStats.queryUsageStats(startDate, endDate);
and then check whether Instagram shows up in the results:
List<UsageInfo> appList = [];
for (var i in list) {
  if (listOfApps.contains(i.packageName)) {
    if (!appList.contains(i)) {
      appList.add(i);
    }
  }
}

for (var usageStats in appList) {
  if (DateTime.now().millisecondsSinceEpoch >= int.parse(usageStats.lastTimeUsed!) + 1000) {
    // Instagram is opened
  }
}

const List<String> listOfApps = [
  'com.instagram.android', // Android package name of Instagram
];
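Note that querying usage stats requires the special "usage access" permission on Android. As far as I remember, the usage_stats plugin exposes helpers to check and request it; a rough sketch (verify the exact method names against the plugin's README):

// Sketch only: method names are as I recall them from the usage_stats README.
Future<List<UsageInfo>> queryLastHour() async {
  bool? granted = await UsageStats.checkUsagePermission();
  if (granted != true) {
    // Opens the system "Usage access" settings screen for the user.
    UsageStats.grantUsagePermission();
  }
  DateTime endDate = DateTime.now();
  DateTime startDate = endDate.subtract(const Duration(hours: 1));
  return UsageStats.queryUsageStats(startDate, endDate);
}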
Hope it's helpful.
I am trying to make a music player in Flutter, using the id3 package to read metadata from a given media file. But for some reason it only reads the first character of each field, or returns null.
This is the code:
import 'package:id3/id3.dart';
Map<String, dynamic> meta = {'': ''};
void getMetadata(String path) {
  MP3Instance mp3instance = MP3Instance(path);
  if (mp3instance.parseTagsSync()) {
    meta = mp3instance.getMetaTags();
    print(meta);
  }
}

String getTitle() {
  return meta["Title"];
}

String getArtist() {
  return meta["Artist"];
}

String getAlbum() {
  return meta["Album"];
}

String getAPIC() {
  String apic = meta["APIC"].toString();
  int index = apic.indexOf('base64:');
  int indexEnd = apic.indexOf('}');
  if (index != -1) {
    return apic.substring(index + 8, indexEnd);
  } else {
    return apic;
  }
}
and I read the values in initState() to populate the list view as follows:
void initState() {
  _returnedSongs = returnMusicList();
  for (FileSystemEntity entity in _returnedSongs) {
    path.add(entity.path);
  }
  for (String str in path) {
    getMetadata(str);
    title = getTitle();
    artist = getArtist();
    album = getAlbum();
    apic = getAPIC();
    listMusic.add(Music(title, artist, album, apic));
  }
  super.initState();
}
From this issue it seems like a bug within the plugin, and one of the recommendations is to use flutter_audio_query:
A Flutter plugin, Android only at the moment, that lets you query audio metadata about the artists, albums, songs and genres available on device storage. All the work is done using the native Android MediaStore and ContentResolver APIs, and query methods run on a background thread. AndroidX is supported.
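A rough sketch of reading the same fields with flutter_audio_query (field and method names from memory, so double-check them against the package documentation):

import 'package:flutter_audio_query/flutter_audio_query.dart';

final FlutterAudioQuery audioQuery = FlutterAudioQuery();

Future<void> loadSongs() async {
  // Queries all songs indexed by MediaStore on the device.
  List<SongInfo> songs = await audioQuery.getSongs();
  for (SongInfo song in songs) {
    print('${song.title} - ${song.artist} - ${song.album}');
    print(song.albumArtwork); // path to the artwork file, may be null
  }
}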
When focusing a TextField in Flutter on Fire OS (Fire TV devices), for example to do a search, the Fire OS virtual keyboard pops up on top of the TextField. It doesn't behave like on other Android devices, where the keyboard sits at the bottom and the TextField stays visible.
On legacy Android, for example, I can use an EditText widget; the same keyboard pops up on top, but whatever I type with the controller is mirrored on the virtual keyboard itself, because the keyboard has its own text field. So my problem is: how can I update the EditText on the Fire TV virtual keyboard from Flutter?
Okay, so I didn't find an answer anywhere, so I had to do something hacky. Here's how I got it to work, since pure Flutter is a no-go.
Solution Overview:
1.- First check whether you are running on Android; in Flutter you can do this with if (Platform.isAndroid).
2.- If you are on Android, open a platform channel to native Android to check the actual manufacturer (code below).
3.- Check the manufacturer or device name for "Amazon", "AFTN", etc.; an if (string.contains(...)) will do the trick.
4.- Open a platform channel to native Android again, show an AlertDialog with an EditText, capture the resulting string, and return it to Flutter.
And that's how I got Fire TV's keyboard to work under Flutter.
if (Platform.isAndroid) {
  checkOs().then((String osName) {
    print("Device running on: $osName");
    if (osName.contains("Amazon") || osName.contains("AFTN")) {
      fireTvKeyboardInput().then((String result) {
        buscarTitulo(result);
      });
    } else {
      _showDialog(); // Keyboard for non-Fire OS devices on Android.
    }
  });
} else {
  // If the device is not Android, e.g. iOS.
  _showDialog();
}
There are two functions used above, checkOs and fireTvKeyboardInput; here's the code:
Future<String> checkOs() async {
  String myResult = "";
  try {
    myResult = await platform.invokeMethod("checkOS", <String, dynamic>{
      'param1': "hello",
    });
  } catch (e) {
    print("exception: $e");
  }
  return myResult;
}

Future<String> fireTvKeyboardInput() async {
  String myResult = "";
  try {
    myResult = await platform.invokeMethod("fireKeyBoard", <String, dynamic>{
      'param1': "hello",
    });
  } catch (e) {
    print("exception: $e");
  }
  return myResult;
}
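For reference, platform in the Dart code above is a MethodChannel. A minimal declaration could look like the following; the channel name here is just an example and has to match whatever the native side registers:

import 'package:flutter/services.dart';

// Hypothetical channel name; replace it with the one registered in MainActivity.
const MethodChannel platform = MethodChannel('app.example/firetv');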
On the native Android side (Kotlin), here's the code:
if (call.method == "checkOS") {
    val operatingSystem = android.os.Build.MANUFACTURER + "- " + android.os.Build.MODEL
    result.success(operatingSystem)
}
if (call.method == "fireKeyBoard") {
    val alert = AlertDialog.Builder(this)
    alert.setMessage("Search")
    // Set an EditText view to get user input
    val input = EditText(this)
    input.hint = "Enter Text"
    input.inputType = InputType.TYPE_CLASS_TEXT
    alert.setView(input)
    input.setOnKeyListener { view, keyCode, keyEvent ->
        if (keyCode == 66) { // 66 == KeyEvent.KEYCODE_ENTER
            val imm = getSystemService(Context.INPUT_METHOD_SERVICE) as InputMethodManager
            imm.hideSoftInputFromWindow(input.windowToken, 0)
        }
        false
    }
    alert.setPositiveButton("Ok") { dialog, whichButton ->
        result.success(input.text.toString())
    }
    alert.setNegativeButton("Cancel") { dialog, whichButton ->
        // Canceled.
    }
    alert.show()
}
I have downloaded the Watson Unity SDK and set it up as shown in the picture, and it works.
My question is: how do I add keyword spotting?
I have read this question: For Watson's Speech-To-Text Unity SDK, how can you specify keywords?
But I can't, for example, locate the SendStart function.
The Speech to Text service does not find keywords. To find keywords you would need to take the final text output and send it to the Alchemy Language service. Natural Language Understanding service is still being abstracted into the Watson Unity SDK but will eventually replace Alchemy Language.
private AlchemyAPI m_AlchemyAPI = new AlchemyAPI();
private void FindKeywords(string speechToTextFinalResponse)
{
    if (!m_AlchemyAPI.ExtractKeywords(OnExtractKeywords, speechToTextFinalResponse))
        Log.Debug("ExampleAlchemyLanguage", "Failed to get keywords.");
}

void OnExtractKeywords(KeywordData keywordData, string data)
{
    Log.Debug("ExampleAlchemyLanguage", "GetKeywordsResult: {0}", JsonUtility.ToJson(keywordData));
}
EDIT 1
Natural Language Understanding has been abstracted into the Watson Unity SDK.
NaturalLanguageUnderstanding m_NaturalLanguageUnderstanding = new NaturalLanguageUnderstanding();
private static fsSerializer sm_Serializer = new fsSerializer();
private void FindKeywords(string speechToTextFinalResponse)
{
    Parameters parameters = new Parameters()
    {
        text = speechToTextFinalResponse,
        return_analyzed_text = true,
        language = "en",
        features = new Features()
        {
            entities = new EntitiesOptions()
            {
                limit = 50,
                sentiment = true,
                emotion = true
            },
            keywords = new KeywordsOptions()
            {
                limit = 50,
                sentiment = true,
                emotion = true
            }
        }
    };

    if (!m_NaturalLanguageUnderstanding.Analyze(OnAnalyze, parameters))
        Log.Debug("ExampleNaturalLanguageUnderstanding", "Failed to analyze.");
}

private void OnAnalyze(AnalysisResults resp, string customData)
{
    fsData data = null;
    sm_Serializer.TrySerialize(resp, out data).AssertSuccess();
    Log.Debug("ExampleNaturalLanguageUnderstanding", "AnalysisResults: {0}", data.ToString());
}
EDIT 2
Sorry, I didn't realize Speech To Text had the ability to do keyword spotting. Thanks to Nathan for pointing that out to me! I added this functionality into a future release of Speech to Text in the Unity SDK. It will look like this for the Watson Unity SDK 1.0.0:
void Start()
{
    // Create credential and instantiate service
    Credentials credentials = new Credentials(_username, _password, _url);
    _speechToText = new SpeechToText(credentials);

    // Add keywords
    List<string> keywords = new List<string>();
    keywords.Add("speech");
    _speechToText.KeywordsThreshold = 0.5f;
    _speechToText.Keywords = keywords.ToArray();
    _speechToText.Recognize(_audioClip, HandleOnRecognize);
}

private void HandleOnRecognize(SpeechRecognitionEvent result)
{
    if (result != null && result.results.Length > 0)
    {
        foreach (var res in result.results)
        {
            foreach (var alt in res.alternatives)
            {
                string text = alt.transcript;
                Log.Debug("ExampleSpeechToText", string.Format("{0} ({1}, {2:0.00})\n", text, res.final ? "Final" : "Interim", alt.confidence));
                if (res.final)
                    _recognizeTested = true;
            }

            if (res.keywords_result != null && res.keywords_result.keyword != null)
            {
                foreach (var keyword in res.keywords_result.keyword)
                {
                    Log.Debug("ExampleSpeechToText", "keyword: {0}, confidence: {1}, start time: {2}, end time: {3}", keyword.normalized_text, keyword.confidence, keyword.start_time, keyword.end_time);
                }
            }
        }
    }
}
Currently you can find the refactor branch here. This release is a breaking change and has all of the higher-level functionality (widgets, config, etc.) removed.
I have installed the Facebook Android app, but my code
public void shareUsingNativeDialog() {
    if (playerChoice == INVALID_CHOICE || computerChoice == INVALID_CHOICE) {
        ShareContent content = getLinkContent();
        // share the app
        if (shareDialog.canShow(content, ShareDialog.Mode.NATIVE)) {
            shareDialog.show(content, ShareDialog.Mode.NATIVE);
        } else {
            showError(R.string.native_share_error);
        }
    } else {
        ShareContent content = getThrowActionContent();
        if (shareDialog.canShow(content, ShareDialog.Mode.NATIVE)) {
            shareDialog.show(content, ShareDialog.Mode.NATIVE);
        } else {
            showError(R.string.native_share_error);
        }
    }
}
still shows the message "Native sharing requires the Facebook for Android application."
So why?
There's no need to force native mode: the default mode of ShareDialog is Mode.AUTOMATIC, which checks whether the Facebook app is installed and falls back to the web dialog otherwise.
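For illustration, a minimal variant of the call from the question using the automatic mode (same shareDialog and getLinkContent() as above):

// Mode.AUTOMATIC (the default) uses the native share dialog when the
// Facebook app is installed and falls back to the web dialog otherwise.
ShareContent content = getLinkContent();
shareDialog.show(content, ShareDialog.Mode.AUTOMATIC);
// or simply: shareDialog.show(content);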
I just bought the "Google In App Billing Plugin for unity3d" by Prime31. I don't understand how to use it in the game that I want to develop.
Can you please show me a code example? I understand I need to use my app key, but I don't know what to do next.
And how do I make test purchases with this plugin?
Please help as much as you can, because I have been stuck on this for quite a while.
This is some of my "purchase maker" object called MoneyTakerScript (inherits from MonoBehaviour):
void Start()
{
    string key = "My App Key...";
    GoogleIAB.init(key);

    var skus = new string[] { "cl.48931", "tp.58932", "mmm.68393" };
    GoogleIAB.queryInventory(skus);

    TPS = GameObject.Find("TPBtn").GetComponent(typeof(TPScript)) as TPScript;
    CLS = GameObject.Find("CLBtn").GetComponent(typeof(CLScript)) as CLScript;
    MMM = GameObject.Find("MMBtn").GetComponent(typeof(MMMScript)) as MMMScript;
}

public void Purchase(string ProductId)
{
    GoogleIAB.purchaseProduct(ProductId);
}

public void UseProduct(string ProductId)
{
    if (ProductId.Contains("cl"))
    {
        CLS.MakeCL();
    }
    if (ProductId.Contains("tp"))
    {
        TPS.MakeTP();
    }
    if (ProductId.Contains("mmm"))
    {
        MMM.MakeMMM();
    }
    GoogleIAB.consumeProduct(ProductId);
}
And this is some of my "purchase listener" object code:
void purchaseSucceededEvent(GooglePurchase purchase)
{
    //Debug.Log( "purchaseSucceededEvent: " + purchase );
    MoneyScript.UseProduct(purchase.productId);
}

void Start()
{
    MoneyScript = GameObject.Find("MoneyTaker").GetComponent(typeof(MoneyTakerScript)) as MoneyTakerScript;
}
I found the problem and solved it!
This line was missing from my AndroidManifest.xml for some reason:
<activity android:name="com.prime31.GoogleIABProxyActivity"></activity>
I just added the line and now in-app purchases work!