What is the approach for using standard I/O in Flutter? I've tried the following code, but nothing renders and the build gets stuck.
import 'dart:io';

void main() {
  stdout.writeln("Enter value");
  String str = stdin.readLineSync();
  // user inputs value
  print(str);
}
The documentation says
> To read text synchronously from the command line (the program blocks
> waiting for user to type information):
String inputText = stdin.readLineSync();
But how do I actually input values? flutter run doesn't let me type anything in.
Actually, what you're doing doesn't require Flutter at all; you can run it using plain Dart. Flutter is meant for building applications for Android, iOS, the web, and similar platforms; it isn't designed for console programs like the one you describe.
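As a minimal sketch, the same code works fine as a console script run directly on the Dart VM (the file name below is illustrative):
// Save as bin/read_input.dart and run with: dart run bin/read_input.dart
import 'dart:io';

void main() {
  stdout.write('Enter value: ');
  final str = stdin.readLineSync(); // blocks until the user presses Enter
  print(str);
}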
I have deployed my Flutter app, and one user says that a certain part is not working. I am therefore interested in finding a way to write all output that usually gets dumped to the console to a string. I could then have that string sent to me and use it to debug the problem.
Is there a way to subscribe to all console output? I found runZoned(() => ...), but this seems to only collect certain logs, and in particular no logs from other isolates.
In theory, it's possible to capture all log output and then store it as a variable. In practice, it's a lot of work and will likely require elevated privileges to access the log stream from within the app itself (which might not even be possible on iOS without a jailbroken device).
However, I propose that you flip the equation - instead of retrieving the log output, capture your logs before they make it to the console in the first place.
This is where packages such as logger shine. By routing all of your print statements through a logging service, you can set up middleware that will capture your output on the way through the pipe. This way, you can store all of the output without having to mess with things like OS permissions and privileges.
import 'package:logger/logger.dart';

final buffer = BufferOutput();
final logger = Logger(output: buffer);

// Collects every log line in memory while still echoing it to the console.
class BufferOutput extends LogOutput {
  final lines = <String>[];

  @override
  void output(OutputEvent event) {
    lines.addAll(event.lines); // keep a copy for later retrieval
    for (var line in event.lines) {
      print(line); // still echo to the console
    }
  }
}
Usage:
logger.v("This is verbose text");
logger.d("This is debug text");
logger.i("This is info text");
logger.w("This is warning text");
logger.e("This is error text");
logger.wtf("This is what-the-fudgestickles text");
print(buffer.lines);
// Output:
// ["This is verbose text","This is debug text","This is info text","This is warning text","This is error text","This is what-the-fudgestickles text"]
Note: This will work if you want to capture normal app logging output. If you want to automatically capture exceptional log output, you are better off using something like Crashlytics or Sentry to capture and collate those error logs for you, since, depending on the error, you can't rely on your app code still being able to run after the error happens anyway.
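For completeness, here is a minimal sketch of how uncaught errors could be routed through the same logger (MyApp is a placeholder for your root widget, and logger refers to the instance set up above; in production you would forward these to Crashlytics/Sentry instead):
import 'dart:async';
import 'package:flutter/foundation.dart';
import 'package:flutter/widgets.dart';

void main() {
  runZonedGuarded(() {
    WidgetsFlutterBinding.ensureInitialized();
    // Route framework errors through the logger instead of the console.
    FlutterError.onError = (FlutterErrorDetails details) {
      logger.e(details.exceptionAsString());
    };
    runApp(MyApp()); // MyApp is a placeholder for your root widget
  }, (error, stack) {
    // Catches uncaught asynchronous errors escaping the zone.
    logger.e('Uncaught: $error\n$stack');
  });
}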
I'm trying to make a Flutter app that uses Google ML Kit text recognition to extract the text of receipts. I got it working, but there are still issues: some single letters don't get recognized, and sometimes even full words or numbers don't get picked up.
I implemented my app following this guide: https://blog.codemagic.io/text-recognition-using-firebase-ml-kit-flutter/.
In this picture you can see what I mean about some numbers and text not getting picked up. [1]
[1]: https://i.stack.imgur.com/nR5SP.jpg
Does anyone know what the problem could be? Any suggestions? Thanks in advance for the help. Below I list some ways I have already tried to fix it:
- Changed the CameraController picture resolution from high to max and ultra.
- Changed my dependency to the newest version.
- Changed to ML Kit text recognition v2.
- Tried using google_ml_vision: https://pub.dev/packages/google_ml_vision
(It's also not the case that these missing words/numbers don't get marked with a rectangle.)
You can use the google_ml_kit package. It works with Google's standalone ML Kit, so there is no need to register the project on Firebase. It is the recommended package for standalone ML Kit, as the firebase_ml_vision package is discontinued.
Recently, the google_ml_kit package was split into a set of packages; for text recognition, the google_mlkit_text_recognition package was created.
For text recognition, you can use the code below (inside an async function, with inputImage being whatever InputImage you built from the camera or a file):
import 'dart:math'; // for Point
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';

final textRecognizer = TextRecognizer();
final RecognizedText recognizedText = await textRecognizer.processImage(inputImage);

String text = recognizedText.text; // the full recognized text
for (TextBlock block in recognizedText.blocks) {
  final Rect rect = block.boundingBox; // bounding box of the block
  final List<Point<int>> cornerPoints = block.cornerPoints;
  final String text = block.text;
  final List<String> languages = block.recognizedLanguages;

  for (TextLine line in block.lines) {
    // Same getters as TextBlock
    for (TextElement element in line.elements) {
      // Same getters as TextBlock
    }
  }
}
To learn how to add text recognition using google_ml_kit, you can refer to this link.
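As a minimal end-to-end sketch (the function name recognizeReceipt is illustrative, and I'm assuming the image is a file on disk):
import 'dart:io';
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';

Future<String> recognizeReceipt(File imageFile) async {
  final inputImage = InputImage.fromFile(imageFile);
  final textRecognizer = TextRecognizer(script: TextRecognitionScript.latin);
  try {
    final recognizedText = await textRecognizer.processImage(inputImage);
    return recognizedText.text;
  } finally {
    textRecognizer.close(); // release the native recognizer when done
  }
}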
I am writing my code within a Jupyter notebook in VS Code. I am hoping to play some of the audio within my data set. However, when I execute the cell, the console reports no errors and produces the widget, but the widget displays 0:00 / 0:00 (see below), indicating there is no sound to play.
Below, I have listed two ways to reproduce the error.
I have acquired data from the Hub data store. Looking specifically at the Spoken MNIST data set, I cannot get the data from the audio tensor to play:
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()
display(Audio(data=sample, rate = 8000, autoplay=True))
The second example is a test (copied from another post) that I ran to see whether something was wrong with the data or with my console, environment, etc.
# Same imports as shown above, plus numpy
import numpy as np

# Toy function to play beats in the notebook
def beat_freq(f1=220.0, f2=224.0):
    max_time = 5
    rate = 8000
    times = np.linspace(0, max_time, rate * max_time)
    signal = np.sin(2 * np.pi * f1 * times) + np.sin(2 * np.pi * f2 * times)
    display(Audio(data=signal, rate=rate))
    return signal

v = interactive(beat_freq, f1=(200.0, 300.0), f2=(200.0, 300.0))
display(v)
I believe that if something is wrong with the data (this is a well-known data set, so I doubt it), then only the second one will play. If it is something to do with the IDE or something else, then neither will work, as is the case now.
Apologies for the late reply! In the future, please tag such questions with activeloop so they're easier to sort through (or hit us up directly in the community Slack: slack.activeloop.ai).
Regarding the Free Spoken Digit Dataset, I managed to track the error down to the way Activeloop Hub and the audio display interact.
Adding [:, 0] to the line that builds sample fixes the display on Colab, since Audio expects one-dimensional data:
%matplotlib inline
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()[:,0]
display(Audio(data=sample, rate = 8000, autoplay=True))
(When we uploaded the dataset, we decided to store the audio as (N, C), where C is the number of channels, which happens to be 1 for this particular dataset. The extra dimension isn't squeezed away automatically.)
Regarding VS Code: the audio, unfortunately, would still not work there (not because of us, but because of VS Code), but you can still try visualizing the Free Spoken Digit Dataset (you can play the audio there, too). Hopefully this addresses your needs!
Let us know if you have further questions.
Mikayel from Activeloop
I'm getting a List<int> as raw data from a recorder plugin for Android and iOS, and I want to display the actual spoken text from the bytes. The data is a stream from the system mic.
Is there any way to get text from the bytes?
Raw data detail:
SampleRate: 44100,
ChannelConfig: MONO-16,
AudioSource: SYSTEM-MIC
Note: I'm already using the SpeechToText plugin and am aware of it, but I feel that at some point it drops words, hence I want to try something else.
Any help will be appreciated.
You can use the mime package, I think:
import 'package:mime/mime.dart';

String? lookupMimeType(String path, {List<int>? headerBytes})
If you fill in the parameters it works; you can pass your audio file's path directly:
var mimeString = lookupMimeType(path);
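A small sketch of how the lookup could be used (file names and bytes are illustrative):
import 'package:mime/mime.dart';

void main() {
  // Lookup by file extension alone:
  print(lookupMimeType('recording.wav')); // audio/x-wav

  // headerBytes lets the lookup also check magic numbers in the data itself;
  // the list below is just a placeholder for the first bytes of your buffer.
  final bytes = <int>[/* first bytes of your recording */];
  print(lookupMimeType('recording', headerBytes: bytes));
}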
I was just wondering if I can determine whether my app is currently running in a testing environment.
The reason is that I am running automated screenshots and want to hide/modify parts of my app only when running that UI test.
For example, I'd like to skip registering for push notifications to avoid the iOS popup at launch.
I'm searching for something like
if (kTestingMode) { ... }
I know that we have a driver that basically launches the app and then connects. I guess the app does not even know whether it is running in test mode or not. But maybe someone knows an answer.
Thanks!
Some answers aim to detect whether you are in debug mode, but the question was about detecting a test environment, not debug mode. In fact, when you run a test you are in debug mode, but you can run an app in debug mode even without running a test.
In order to properly detect if you are running a test, you can check for the presence of the key FLUTTER_TEST in Platform.environment.
import 'dart:io' show Platform;
if (Platform.environment.containsKey('FLUTTER_TEST')) { ... }
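As a sketch, this can be wrapped in a small getter (the name isRunningInTest is illustrative; note that Platform.environment is not available on the web):
import 'dart:io' show Platform;

bool get isRunningInTest => Platform.environment.containsKey('FLUTTER_TEST');

void main() {
  if (isRunningInTest) {
    // e.g. skip registering for push notifications
  }
}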
Another solution is to use a --dart-define build environment variable. It is available from Flutter 1.17.
Example of running tests with --dart-define:
flutter drive --dart-define=testing_mode=true --target=test_driver/main.dart
In your code you can check this environment variable as follows:
const bool.fromEnvironment('testing_mode', defaultValue: false)
Not using const can lead to the variable not being read on mobile, see here.
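A minimal sketch of how this could look in the app (the flag name testing_mode matches the flutter drive command above):
// Compile-time flag, populated by --dart-define=testing_mode=true.
const bool kTestingMode = bool.fromEnvironment('testing_mode', defaultValue: false);

void main() {
  if (kTestingMode) {
    // Skip registering for push notifications, hide parts of the UI, etc.
  }
}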
Okay, I just found a solution by myself.
What I did is introduce a global variable which I set in my main driver.
First, I created a new globals.dart file:
library my_prj.globals;
bool testingActive = false;
Then, in my test_driver/main.dart file, I import that and set the testingActive variable to true:
import '../lib/globals.dart' as globals;
..
void main() {
  final DataHandler handler = (_) async {
    final response = {};
    // 'c' is presumably 'dart:convert' imported with a prefix.
    return Future.value(c.jsonEncode(response));
  };
  // Enable integration testing with the Flutter Driver extension.
  // See https://flutter.io/testing/ for more info.
  enableFlutterDriverExtension(handler: handler);
  globals.testingActive = true;
  WidgetsApp.debugAllowBannerOverride = false; // remove debug banner
  runApp(App());
}
Now I have this global variable available everywhere in my Flutter app by simply importing and checking it.
For example, in my app.dart:
import '../globals.dart' as globals;
...
if (globals.testingActive) {
  print("We are in a testing environment!");
}
Is there a better solution? Guess this works just fine!
I have another solution for this; maybe it would work out for you as well. Let me know how it goes.
1. I suggest using assert(), as it only runs in debug mode.
Here is an example for navigator:
assert(() {
  if (navigator == null && !nullOk) {
    throw FlutterError('Error!!!');
  }
  return true;
}());
Note in particular the () at the end of the call: assert can only operate on a boolean, so just passing in a function doesn't work.
2. Another way is to use kReleaseMode from package:flutter/foundation.dart.
kReleaseMode is a constant, so the compiler is able to remove unused code correctly, and we can safely do:
import 'package:flutter/foundation.dart' as Foundation;
//is release mode
if (Foundation.kReleaseMode) {
print('release mode');
} else {
print('debug mode');
}
3. This snippet will also be helpful for you:
bool get isInDebugMode {
  bool inDebugMode = false;
  assert(inDebugMode = true); // the assignment only runs in debug builds
  return inDebugMode;
}
If not, you can configure your IDE to launch a different main.dart in debug mode in which you set a boolean.