Is there a way or a package that can help with taking a fullscreen screenshot, or a screenshot of a wrapped widget, or at least with sharing a picture of the screen via the native share option?
There are some packages that I have tried, but I did not find any useful one.
RepaintBoundary is the widget you're looking for; it can be converted into an image.
Example:
Future<CaptureResult> captureImage() async {
  final pixelRatio = MediaQuery.of(context).devicePixelRatio;
  final boundary = _boundaryKey.currentContext!.findRenderObject()! as RenderRepaintBoundary;
  final image = await boundary.toImage(pixelRatio: pixelRatio);
  final data = await image.toByteData(format: ui.ImageByteFormat.png);
  // CaptureResult is a small value class from the linked capture_widget.dart.
  return CaptureResult(data!.buffer.asUint8List(), image.width, image.height);
}

final _boundaryKey = GlobalKey();

RepaintBoundary(
  key: _boundaryKey,
  child: Container(), // Your widgets to be captured.
)
Link: capture_widget.dart
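If you also want to hand the captured picture to the native share sheet (the other part of the question), one rough way is to write the PNG bytes to a temporary file and share that file. A minimal sketch, assuming the share_plus and path_provider packages; the shareCapturedImage helper and file name are made up for illustration:
import 'dart:io';
import 'dart:typed_data';

import 'package:path_provider/path_provider.dart';
import 'package:share_plus/share_plus.dart';

// Hypothetical helper: writes the captured PNG bytes to a temp file
// and opens the platform share dialog for it.
Future<void> shareCapturedImage(Uint8List pngBytes) async {
  final tempDir = await getTemporaryDirectory();
  final file = File('${tempDir.path}/capture.png');
  await file.writeAsBytes(pngBytes);
  await Share.shareXFiles([XFile(file.path)], text: 'Screenshot');
}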
I have a method in my cubit that captures a widget as an image, and I am testing this method. Long story short, this method calls _captureFromWidget (see below; the code is copied from the screenshot package), which accepts the Widget and returns a Uint8List. (I am not testing that the package works correctly; I am testing that my method and its parameters work correctly.)
The problem is that in the real app the widget is captured correctly, but in the test the font of the Text widgets is not rendered correctly, and boxes are shown instead of letters.
I know the reason for this; see here and here.
I tried loading the font as they suggested:
in pubspec.yaml:
assets:
- assets/fonts/
and I have my font, AbrilFatface-Regular.ttf, in the assets/fonts folder
in my test:
final Future<ByteData> abrilFatFaceFontData = rootBundle.load('assets/fonts/AbrilFatface-Regular.ttf');
final FontLoader fontLoader = FontLoader('AbrilFatFace-Regular')..addFont(abrilFatFaceFontData);
await fontLoader.load();
Text text = ...; // text that uses custom font from above
cubit.captureWidget(...); // will invoke _captureFromWidget
but still, boxes are shown instead of letters in the captured image, whereas calling the same method with the same arguments from the real app renders the text correctly.
So how do I provide the font to the test correctly?
Here is the code that captures the widget:
/// [context] parameter is used to Inherit App Theme and MediaQuery data.
Future<Uint8List> _captureFromWidget(
  widgets.Widget widget, {
  required Duration delay,
  double? pixelRatio,
  widgets.BuildContext? context,
}) async {
  // Retry counter
  int retryCounter = 3;
  bool isDirty = false;

  widgets.Widget child = widget;

  if (context != null) {
    // Inherit Theme and MediaQuery of app
    child = widgets.InheritedTheme.captureAll(
      context,
      widgets.MediaQuery(data: widgets.MediaQuery.of(context), child: child),
    );
  }

  final RenderRepaintBoundary repaintBoundary = RenderRepaintBoundary();
  Size logicalSize = ui.window.physicalSize / ui.window.devicePixelRatio;
  Size imageSize = ui.window.physicalSize;

  assert(logicalSize.aspectRatio.toPrecision(5) == imageSize.aspectRatio.toPrecision(5));

  final RenderView renderView = RenderView(
    window: ui.window,
    child: RenderPositionedBox(alignment: Alignment.center, child: repaintBoundary),
    configuration: ViewConfiguration(
      size: logicalSize,
      devicePixelRatio: pixelRatio ?? 1.0,
    ),
  );

  final PipelineOwner pipelineOwner = PipelineOwner();
  final widgets.BuildOwner buildOwner = widgets.BuildOwner(
      focusManager: widgets.FocusManager(),
      onBuildScheduled: () {
        // The current render is dirty, mark it.
        isDirty = true;
      });

  pipelineOwner.rootNode = renderView;
  renderView.prepareInitialFrame();

  final widgets.RenderObjectToWidgetElement<RenderBox> rootElement =
      widgets.RenderObjectToWidgetAdapter<RenderBox>(
          container: repaintBoundary,
          child: widgets.Directionality(
            textDirection: TextDirection.ltr,
            child: child,
          )).attachToRenderTree(
    buildOwner,
  );

  // Render Widget
  buildOwner.buildScope(
    rootElement,
  );
  buildOwner.finalizeTree();

  pipelineOwner.flushLayout();
  pipelineOwner.flushCompositingBits();
  pipelineOwner.flushPaint();

  ui.Image? image;

  do {
    // Reset the dirty flag
    isDirty = false;

    image = await repaintBoundary.toImage(
        pixelRatio: pixelRatio ?? (imageSize.width / logicalSize.width));

    // This delay should increase with Widget tree Size
    await Future.delayed(delay);

    // Check does this require rebuild
    if (isDirty) {
      // Previous capture has been updated, re-render again.
      buildOwner.buildScope(
        rootElement,
      );
      buildOwner.finalizeTree();
      pipelineOwner.flushLayout();
      pipelineOwner.flushCompositingBits();
      pipelineOwner.flushPaint();
    }
    retryCounter--;
    // Retry until capture is successful.
  } while (isDirty && retryCounter >= 0);

  final ByteData? byteData = await image.toByteData(format: ui.ImageByteFormat.png);

  return byteData!.buffer.asUint8List();
}
It turns out I was doing everything correctly except for one thing.
The line final FontLoader fontLoader = FontLoader('AbrilFatFace-Regular')..addFont(abrilFatFaceFontData); determines the fontFamily name that must be used in the TextStyle of the Text element.
Since I was using AbrilFatFace as the fontFamily, the test was not working.
So the solution is to use the same name in fontFamily as was passed to the FontLoader, since that's the name the font will be identified with.
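For example, with the loader from the question, the TextStyle has to reference exactly the family name that was registered (a minimal sketch based on the snippets above):
// The family name passed to FontLoader is the name the font is registered under.
final fontLoader = FontLoader('AbrilFatFace-Regular')
  ..addFont(rootBundle.load('assets/fonts/AbrilFatface-Regular.ttf'));
await fontLoader.load();

// The TextStyle must use the same family name, not e.g. 'AbrilFatFace'.
const text = Text(
  'Preview',
  style: TextStyle(fontFamily: 'AbrilFatFace-Regular'),
);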
I have a widget AvatarImage which renders a CircleAvatar if an image is provided or a default image DefaultAvatar otherwise.
The widget implementation looks like this:
class AvatarImage extends StatelessWidget {
  final ImageProvider? image;

  const AvatarImage({this.image}) : super();

  @override
  Widget build(BuildContext context) {
    if (image == null) return DefaultAvatar();
    return CircleAvatar(
      backgroundImage: image,
    );
  }
}
Now I wanted to test that image is actually rendered using the following test case:
testWidgets("Renders image avatar when image is provided", (WidgetTester tester) async {
mockNetworkImagesFor(() async {
final testNetworkImage = CachedNetworkImageProvider("https://placebear.com/100/100");
await tester.pumpWidget(TestApp(
child: AvatarImage(image: testNetworkImage)
));
await tester.pumpAndSettle();
final image = find.image(testNetworkImage);
debugDumpApp();
expect(image, findsOneWidget);
expect((image as Image).image, equals(testNetworkImage));
});
});
But it is unable to find the image. I have tried using find.byType(CachedNetworkImage), but this also has no effect; the test always fails because the image cannot be found.
I have verified that the widget is present in the tree using debugDumpApp.
Why doesn't the finder find this image?
How can I find it without using keys?
I am trying to export a widget that is wider than the actual viewport as an image.
I have found this method:
Future<Uint8List?> createImageFromWidget(Widget widget,
    {Size? logicalSize, Size? imageSize}) async {
  final repaintBoundary = RenderRepaintBoundary();

  logicalSize ??= ui.window.physicalSize / ui.window.devicePixelRatio;
  imageSize ??= ui.window.physicalSize;

  assert(logicalSize.aspectRatio == imageSize.aspectRatio);

  final renderView = RenderView(
    window: ui.window,
    child: RenderPositionedBox(child: repaintBoundary),
    configuration: ViewConfiguration(
      size: logicalSize,
    ),
  );

  final pipelineOwner = PipelineOwner();
  final buildOwner = BuildOwner(focusManager: FocusManager());

  pipelineOwner.rootNode = renderView;
  renderView.prepareInitialFrame();

  final rootElement = RenderObjectToWidgetAdapter<RenderBox>(
    container: repaintBoundary,
    child: widget,
  ).attachToRenderTree(buildOwner);

  buildOwner.buildScope(rootElement);
  buildOwner.finalizeTree();

  pipelineOwner.flushLayout();
  pipelineOwner.flushCompositingBits();
  pipelineOwner.flushPaint();

  final image = await repaintBoundary.toImage(
      pixelRatio: imageSize.width / logicalSize.width);
  final byteData = await image.toByteData(format: ui.ImageByteFormat.png);

  if (byteData == null) return null;

  return byteData.buffer.asUint8List();
}
From what I understand, this method builds the referenced widget in a new offscreen widget tree and sets the viewport size to the size of the widget that needs to be exported.
This method in and of itself works, but here's my problem:
I am referencing the Widget that needs to get exported by GlobalKey.
So it looks something like this:
RepaintBoundary(
  key: myGlobalKey,
  child: WidgetIWantToExport(),
);

Future<void> exportWidget() async {
  final currentWidgetSize = myGlobalKey.currentContext!.size!;
  final keyWidget = myGlobalKey.currentWidget!;

  final pngBytes = await createImageFromWidget(
    keyWidget,
    logicalSize: Size(currentWidgetSize.width, currentWidgetSize.height),
    imageSize: Size(currentWidgetSize.width, currentWidgetSize.height),
  );
  // exporting bytes
}
So when I run this method, .attachToRenderTree() throws an error: '_elements.contains(element)': is not true. ... Duplicate GlobalKey detected in widget tree.
So I am assuming the problem is that it tries to build a widget with the same GlobalKey in the new render tree, which throws this error.
How can I fix this?
I solved this by accessing the child of the RepaintBoundary that I am referencing by key.
The code looks like this:
/// Creates an image from the given widget by first spinning up a element and render tree,
/// and then creating an image via a [RepaintBoundary].
///
/// The final image will be of size [exportViewportSize].
/// If no [exportViewportSize] is supplied, the image will be of size [logicalSize], which has a fallback value.
Future<Uint8List?> createImageFromWidget(Widget widget,
    {Size? logicalSize, Size? imageSize, Size? exportViewportSize}) async {
  final renderRepaintBoundary = RenderRepaintBoundary();
  final repaintBoundary = widget as RepaintBoundary; // Cast the widget as RepaintBoundary.
  final renderWidget = repaintBoundary.child!; // Access its child widget.

  logicalSize ??= ui.window.physicalSize / ui.window.devicePixelRatio;
  imageSize ??= ui.window.physicalSize;

  assert(logicalSize.aspectRatio == imageSize.aspectRatio);

  final renderView = RenderView(
    window: ui.window,
    child: RenderPositionedBox(child: renderRepaintBoundary),
    configuration: ViewConfiguration(
      size: exportViewportSize ?? logicalSize,
    ),
  );

  final pipelineOwner = PipelineOwner();
  final buildOwner = BuildOwner(focusManager: FocusManager());

  pipelineOwner.rootNode = renderView;
  renderView.prepareInitialFrame();

  final rootElement = RenderObjectToWidgetAdapter<RenderBox>(
    container: renderRepaintBoundary,
    child: MaterialApp(
      home: renderWidget,
    ),
  ).attachToRenderTree(buildOwner);

  buildOwner.buildScope(rootElement);
  buildOwner.finalizeTree();

  pipelineOwner.flushLayout();
  pipelineOwner.flushCompositingBits();
  pipelineOwner.flushPaint();

  final image = await renderRepaintBoundary.toImage(
      pixelRatio: imageSize.width / logicalSize.width);
  final byteData = await image.toByteData(format: ui.ImageByteFormat.png);

  if (byteData == null) return null;

  return byteData.buffer.asUint8List();
}
Is it possible to save a Flutter canvas to a GIF or a video? Is it possible to convert the canvas directly to a video with ffmpeg?
Can we export a custom painter that uses an animation controller to a GIF image, a sequence of images, or even a video such as an MP4 file?
Yes, I did this once (2 years ago) and converted a Flutter animation to an MP4 file. Unfortunately I couldn't find the code, but you can follow these steps to achieve what you want.
Capture your widget with RenderRepaintBoundary:
https://api.flutter.dev/flutter/rendering/RenderRepaintBoundary/toImage.html
import 'dart:typed_data';
import 'dart:ui' as ui;

import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';

class PngHome extends StatefulWidget {
  const PngHome({super.key});

  @override
  State<PngHome> createState() => _PngHomeState();
}

class _PngHomeState extends State<PngHome> {
  GlobalKey globalKey = GlobalKey();

  Future<void> _capturePng() async {
    final RenderRepaintBoundary boundary =
        globalKey.currentContext!.findRenderObject()! as RenderRepaintBoundary;
    final ui.Image image = await boundary.toImage();
    final ByteData? byteData = await image.toByteData(format: ui.ImageByteFormat.png);
    final Uint8List pngBytes = byteData!.buffer.asUint8List();
    print(pngBytes);
  }

  @override
  Widget build(BuildContext context) {
    return RepaintBoundary(
      key: globalKey,
      child: Center(
        child: TextButton(
          onPressed: _capturePng,
          child: const Text('Hello World', textDirection: TextDirection.ltr),
        ),
      ),
    );
  }
}
You need to capture each frame of your animation and save it to a directory with sequential file names, for example 1.png, 2.png, ..., 1000.png (see the sketch after the snippet below).
import 'dart:io';
import 'dart:typed_data';

import 'package:path_provider/path_provider.dart';

Uint8List imageInUint8List = ...; // store the Uint8List image here
final tempDir = await getTemporaryDirectory();
File file = await File('${tempDir.path}/image.png').create();
file.writeAsBytesSync(imageInUint8List);
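The snippet above saves a single image.png; for an animation you would save each captured frame under an increasing index. A minimal sketch, where the saveFrame helper and the frame loop are assumptions rather than part of the original answer:
// Hypothetical helper: saves one captured frame as <index>.png in the temp directory.
Future<void> saveFrame(Uint8List pngBytes, int index) async {
  final tempDir = await getTemporaryDirectory();
  final file = File('${tempDir.path}/$index.png');
  await file.writeAsBytes(pngBytes);
}

// Example usage while stepping through the animation:
// for (var i = 1; i <= frameCount; i++) {
//   // advance the animation, capture the RepaintBoundary to pngBytes, then:
//   await saveFrame(pngBytes, i);
// }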
Install ffmpeg_kit_flutter (https://pub.dev/packages/ffmpeg_kit_flutter) and use it to execute an FFmpeg command:
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';

FFmpegKit.execute('your command').then((session) async {
  final returnCode = await session.getReturnCode();

  if (ReturnCode.isSuccess(returnCode)) {
    // SUCCESS
  } else if (ReturnCode.isCancel(returnCode)) {
    // CANCEL
  } else {
    // ERROR
  }
});
Search for a command that converts your images with FFmpeg to a GIF or MP4 (something like these: Example1 or Example2).
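For example, assuming the frames were saved as 1.png, 2.png, ... in the temporary directory, a command along these lines turns them into an MP4 (the frame rate and file names here are assumptions, not from the original answer):
// '%d.png' matches the numbered frames; -framerate sets the input frame rate,
// and -pix_fmt yuv420p keeps the MP4 playable on most players.
final cmd =
    '-framerate 30 -i ${tempDir.path}/%d.png -pix_fmt yuv420p ${tempDir.path}/out.mp4';
await FFmpegKit.execute(cmd);
// For a GIF, point the output at a .gif file instead of .mp4.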
Alternatively, you can use the screenshot library. By wrapping the parent container with its Screenshot widget you can capture the widget as multiple images, and those images can be converted to a GIF, but I think it is tricky, inefficient, and difficult to implement. You can give it a try.
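A rough sketch of that approach, assuming the screenshot package's ScreenshotController API (widget names here are placeholders):
import 'dart:typed_data';

import 'package:screenshot/screenshot.dart';

final screenshotController = ScreenshotController();

// Wrap the widget you want to capture:
// Screenshot(
//   controller: screenshotController,
//   child: MyAnimatedWidget(),
// )

// Capture the current frame as PNG bytes; repeat per frame and feed the
// resulting images to ffmpeg as described above.
Future<void> captureFrame() async {
  final Uint8List? bytes = await screenshotController.capture();
  if (bytes != null) {
    // save the bytes, e.g. with the saveFrame helper sketched earlier
  }
}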
I want to print an image in Flutter. I use the package "printing 3.5.0". My problem is that I want to use the whole space of the page, but when I set the orientation or size, both parameters are ignored.
_printImage(GlobalKey globalKey) async {
  final pdf = new PdfDocument();
  final page = new PdfPage(pdf, pageFormat: PdfPageFormat.a4);
  final doc = pw.Document();
  final g = page.getGraphics();

  RenderRepaintBoundary boundary =
      globalKey.currentContext.findRenderObject();
  ui.Image _image = await boundary.toImage();
  var bytes = await _image.toByteData(format: ui.ImageByteFormat.rawRgba);

  PdfImage image = new PdfImage(pdf,
      image: bytes.buffer.asUint8List(),
      width: _image.width,
      height: _image.height);

  doc.addPage(pw.Page(build: (pw.Context context) {
    return pw.Center(
      child: pw.Image(image),
    ); // Center
  }));

  g.drawImage(image, 20.0, 0.0);

  Printing.layoutPdf(onLayout: (pageFormat) {
    return pdf.save();
  });
}
Even if I adjust the width and height of the image, the whole page is not used.
Is there a parameter that automatically scales the image to fill the whole page and respects the orientation?
I'm grateful for any help.