Transform FlutterRenderer or SurfaceTexture - flutter

I am currently struggling a bit with trying to improve the orientation handling of the Flutter Camera plugin, specifically on Android. The problem I am facing is that when I move from portrait to landscape the preview doesn't rotate correctly (the actual captured picture or recorded video is fine). The Camera plugin uses the Android Camera2 API under the hood, and from researching this topic a bit it seems you can correct the rotation of the preview using the setTransform method of the TextureView, as demonstrated in the Android Camera2Basic example app (see https://github.com/googlearchive/android-Camera2Basic/blob/4cc1c3e219d8d168b7893f7a6b2b348740679e5a/Application/src/main/java/com/example/android/camera2basic/Camera2BasicFragment.java#L740).
The problem is that in the Flutter plugin I don't have access to a TextureView; I only have access to the FlutterRenderer instance and the SurfaceTexture created by the FlutterRenderer's createSurfaceTexture method.
So my question is: is it possible to rotate or transform the SurfaceTexture, and if so, how would I approach this? Currently I can compensate for the rotation on the Dart/Flutter side by nesting the Texture in a RotatedBox widget. This, however, messes up the orientation on the iOS side (meaning I need to use the widget conditionally based on the platform). It also puts a bit of extra overhead on people who want to use the Texture widget directly in their apps instead of the preconfigured widget supplied by the camera package.
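For reference, the Dart-side workaround looks roughly like the sketch below. RotationCorrectedPreview is just an illustrative name, and the quarterTurns value is a simplification; the real correction would depend on the device and sensor orientation.

```dart
import 'dart:io' show Platform;

import 'package:flutter/widgets.dart';

/// Hypothetical wrapper around the Texture widget that compensates for the
/// preview rotation on Android only; textureId comes from the camera
/// plugin's controller.
class RotationCorrectedPreview extends StatelessWidget {
  const RotationCorrectedPreview({super.key, required this.textureId});

  final int textureId;

  @override
  Widget build(BuildContext context) {
    final Widget preview = Texture(textureId: textureId);

    // iOS already renders the preview with the correct orientation,
    // so only Android needs the compensation.
    if (!Platform.isAndroid) return preview;

    // In landscape the Android preview comes out rotated, so counter-rotate
    // it by a quarter turn. This only sketches the idea; the exact number of
    // turns depends on the device and sensor orientation.
    final Orientation orientation = MediaQuery.of(context).orientation;
    return RotatedBox(
      quarterTurns: orientation == Orientation.landscape ? 1 : 0,
      child: preview,
    );
  }
}
```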
Below is a screenshot demonstrating the behaviour:

Related

Face Detection in Preview Camera Feed on Flutter

How do we "draw a square" on detected faces on camera preview feed in Flutter? Is there a cross platform solution to this?
Flutter provides a Camera Plugin, but is there a way for us to draw a square box detecting faces on the preview feed? Any thoughts on this please?
Firstly, get the image data. This can be done either by using the camera plugin's output or by communicating directly with the SurfaceView/TextureView.
Secondly, run a face detection algorithm. If you do not need a cross-platform solution, ML Kit sounds good (see https://medium.flutterdevs.com/face-detection-in-flutter-2af14455b90d?gi=f5ead7c6d7c9). If you need cross-platform, you can use a Rust algorithm like https://github.com/atomashpolskiy/rustface and bind the Rust code to Flutter via https://github.com/fzyzcjy/flutter_rust_bridge, or use a C++ face detection algorithm and bind it to Flutter (though the setup may be a bit harder).
Lastly, once you know where the face is, draw a box around it, for example with a Container widget, as sketched below.
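A minimal sketch of that last step, assuming you already have the face bounding boxes from whatever detector you picked, mapped into the preview's coordinate space (FacePreviewOverlay and faceRects are illustrative names, not part of any plugin):

```dart
import 'package:flutter/material.dart';

/// Hypothetical overlay: draws a box (a plain Container) over each detected
/// face. `preview` would be the camera preview widget and `faceRects` the
/// bounding boxes returned by your detector, already converted into the
/// preview widget's coordinate space.
class FacePreviewOverlay extends StatelessWidget {
  const FacePreviewOverlay({
    super.key,
    required this.preview,
    required this.faceRects,
  });

  final Widget preview;
  final List<Rect> faceRects;

  @override
  Widget build(BuildContext context) {
    return Stack(
      children: [
        preview,
        // One positioned, bordered Container per detected face.
        for (final rect in faceRects)
          Positioned(
            left: rect.left,
            top: rect.top,
            width: rect.width,
            height: rect.height,
            child: Container(
              decoration: BoxDecoration(
                border: Border.all(color: Colors.greenAccent, width: 2),
              ),
            ),
          ),
      ],
    );
  }
}
```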

Unity AR UI not showing up

I have created a simple Unity AR Foundation app which places objects on a plane whenever the screen is touched. I would like to add some UI so the user can press a button rather than anywhere on the screen.
I have followed several different tutorials, which all seem to do mostly the same thing: right-click the Hierarchy -> UI -> Button. I have scaled the button so it should fit my mobile screen and anchored it to the center, so it should be easy enough to find.
These are the canvas settings:
Might the UI somehow be hidden behind the camera feed from the AR Session Origin -> AR Camera? Am I missing any steps to anchor the UI to the screen?
As you can probably tell, I am very new to Unity but I feel like I have followed the tutorials for creating a UI, but it simply won't show. If you need more information, please just ask and I will provide.
Not sure, but it sounds like you might need the Canvas Scaler to scale with the screen size. Change the UI Scale Mode to Scale With Screen Size.
I was compiling the wrong scene. I had two very similar scenes, so when I compiled I didn't realize there were no changes and that I was inspecting the entirely wrong scene.
Once I changed to the correct scene the setup above worked as expected.

How to get manual camera focus in Flutter

Currently I'm using this library: https://pub.dev/packages/camera. It has setFocusMode, which can be set to either auto or locked, but I need a way to get manual focus for the camera, where the user can tap on the camera feed and the focus is adjusted accordingly.
How do I go about implementing this in my app?
I found this plugin: https://pub.dev/documentation/manual_camera/latest/. Does this work? You could use focus distance: if you could get the distance to the object, you could set the focus that way. It's almost like shooting out a ray in game programming. I don't know whether this is possible, but maybe you could estimate the distance from the size of the objects in the image. Someone else has probably already figured this out.
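For what it's worth, more recent versions of the camera plugin also expose setFocusPoint (and setExposurePoint), which take a normalized Offset and are enough for basic tap-to-focus. A minimal sketch, assuming an already-initialized CameraController and that those methods are available in your plugin version:

```dart
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

/// Minimal tap-to-focus sketch: wraps the preview in a GestureDetector and
/// forwards the tap position (normalized to 0..1) to the camera plugin.
/// Assumes `controller` is already initialized.
class TapToFocusPreview extends StatelessWidget {
  const TapToFocusPreview({super.key, required this.controller});

  final CameraController controller;

  Future<void> _onTapUp(TapUpDetails details, BoxConstraints constraints) async {
    // Convert the tap position into the normalized (0,0)-(1,1) coordinates
    // that setFocusPoint expects.
    final offset = Offset(
      details.localPosition.dx / constraints.maxWidth,
      details.localPosition.dy / constraints.maxHeight,
    );
    await controller.setFocusPoint(offset);
    // Optionally move the exposure point to the same spot as well.
    await controller.setExposurePoint(offset);
  }

  @override
  Widget build(BuildContext context) {
    return LayoutBuilder(
      builder: (context, constraints) => GestureDetector(
        onTapUp: (details) => _onTapUp(details, constraints),
        child: CameraPreview(controller),
      ),
    );
  }
}
```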

Is it possible to change the focus for Camera Module V2?

I am using the camera for reading some text, and currently my images look quite blurry.
Is it possible to change the focus of the camera?
I am using
https://www.raspberrypi.org/products/camera-module-v2/
Yes, it's definitely possible; I have done it many times. Sometimes the camera box even includes a specific tool for rotating the lens (check whether you have it; in my experience it's not always present). If you don't have the tool, take thin pliers and rotate the lens; you can look here.

Apply custom camera filters on live camera preview - Swift

I'm looking to make a native iPhone iOS application in Swift 3/4 which uses the live preview of the back-facing camera and allows users to apply filters like in the built-in Camera app. The idea was for me to create my own filters by adjusting hue/RGB/brightness levels, etc. Eventually I want to create a hue slider which allows users to filter for specific colours in the live preview.
All of the answers I came across for a similar problem were posted > 2 years ago and I'm not even sure if they provide me with the relevant, up-to-date solution I am looking for.
I'm not looking to take a photo and then apply a filter afterwards. I'm looking for the same functionality as the native Camera app. To apply the filter live as you are seeing the camera preview.
How can I create this functionality? Can this be achieved using AVFoundation? AVKit? Can this functionality be achieved with ARKit perhaps?
Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.
Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:
Use AVCaptureVideoDataOutput to get live video frames.
Use CVMetalTextureCache or CVPixelBufferPool to get the video pixel buffers accessible to your favorite rendering technology.
Draw the textures using Metal (or OpenGL or whatever) with a Metal shader or Core Image filter to do pixel processing on the GPU during your render pass.
BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.