I am currently working on a video app, but I don't know how to build a Flutter app that can edit videos: trim them, fit them to a specific aspect ratio (e.g. by adding extra blurred space to fill the frame), apply filters, or do other things with video.
I don't even know whether these things are possible with Flutter. I have no idea what to learn to implement this, and googling didn't help either.
I know there are libraries for these features, but I want to learn how to build these things on my own, because I don't want to be a developer who depends on other people's libraries.
I want to learn to write these on my own.
I may not have explained myself well; sorry for the vague question, but please don't close it, I really need to know the answer.
I tried googling and I tried looking up documentation, but it wasn't much help.
First you need to decide which platform(s) the application is for, because the answer depends on the platform you choose to learn.
Flutter is good at creating a single code base for multiple platforms at once, but unfortunately there are things it can't do on its own and that must be done in the target platform's native language.
If the app is for Android and iOS, it may be better to learn Jetpack Compose and Swift.
Flutter will save you time in the future, but you would have to learn all three, including Flutter.
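Whichever platform you end up on, trimming and aspect-ratio padding are usually done by driving FFmpeg (that's what most of the video-editing plugins wrap under the hood). Here is a minimal sketch, in Python for illustration, of building an FFmpeg command that trims a clip and fits it into a target frame with a blurred background. The file names and filter parameters are placeholders, and it assumes `ffmpeg` is installed on the machine.

```python
# Sketch: build an FFmpeg command that trims a clip and pads it to a
# target aspect ratio, filling the empty space with a blurred copy of
# the video. Paths, sizes, and the blur strength are illustrative.

def build_trim_pad_command(src, dst, start, duration, width=1080, height=1920):
    """Trim [start, start+duration] seconds and fit the clip into a
    width x height frame over a blurred, scaled-to-fill background."""
    filter_graph = (
        # duplicate the video stream so it can feed two chains
        "[0:v]split=2[a][b];"
        # background: scale to fill the frame, crop the overflow, blur
        f"[a]scale={width}:{height}:force_original_aspect_ratio=increase,"
        f"crop={width}:{height},boxblur=20[bg];"
        # foreground: scale to fit inside the frame without cropping
        f"[b]scale={width}:{height}:force_original_aspect_ratio=decrease[fg];"
        # center the sharp foreground on the blurred background
        "[bg][fg]overlay=(W-w)/2:(H-h)/2"
    )
    return [
        "ffmpeg",
        "-ss", str(start),        # seek to the trim start
        "-t", str(duration),      # keep this many seconds
        "-i", src,
        "-filter_complex", filter_graph,
        "-c:a", "copy",           # pass the audio through untouched
        dst,
    ]

cmd = build_trim_pad_command("in.mp4", "out.mp4", start=5, duration=10)
print(" ".join(cmd))
```

In a Flutter app you would run the same command line through an FFmpeg wrapper plugin or a platform channel; the filter graph is the part that carries over unchanged.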
How does one go about creating a Face Swap mechanism in Flutter?
Can anyone point me in the right direction?
Thank you
You’ll probably need a good plugin to do all the hard work for you. I recommend Google’s ML Kit on Flutter, as it is the most popular way to run on-device ML with Flutter.
The face detection plugin is what you want. You would basically get the face oval with face contour detection and swap those shapes, and this can be done in real time with a given video input.
But you should keep in mind that the plugin is on v0.0.1. If you’re aiming for production, you’d better do that with Swift or Kotlin.
There are multiple ways to achieve this in Flutter, either in real time or with a delay of a few seconds.
You can use one of these packages.
OpenCV
TensorFlow
Google's ML Kit
You may not get good Flutter support from OpenCV or TensorFlow directly, but you can integrate the OpenCV/TensorFlow native libraries or SDKs for both Android and iOS and invoke them through platform channels.
There is one more possible solution, but it will definitely have a delay: for this kind of ML project, Python has great library and project support.
You can set up a Python service that is responsible for the face swapping; it takes input from the Flutter app (over a REST API or a socket) and returns the output image after the face swap.
Some great face-swap projects are available on GitHub that you can look into.
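The service side of that idea can be sketched with nothing but the Python standard library. This is a minimal, hypothetical skeleton: the HTTP plumbing is real, but the actual face-swap step is a stub where you would call into OpenCV/dlib or one of the GitHub projects.

```python
# Sketch of the Python-service approach: a minimal HTTP endpoint that
# accepts an image upload from the Flutter app and returns the processed
# image. swap_faces() is a placeholder for the real pipeline.
from http.server import BaseHTTPRequestHandler, HTTPServer

def swap_faces(image_bytes: bytes) -> bytes:
    # Placeholder: a real implementation would detect face landmarks,
    # warp one face onto the other, and re-encode the image.
    return image_bytes

class FaceSwapHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        image = self.rfile.read(length)    # raw image bytes from the app
        result = swap_faces(image)         # run the (stubbed) pipeline
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):          # keep the demo quiet
        pass

def serve(port=8000):
    """Blocking entry point; run this on the machine hosting the service."""
    HTTPServer(("127.0.0.1", port), FaceSwapHandler).serve_forever()
```

The Flutter app would then POST a captured frame to this endpoint and display the returned image, which is where the seconds of delay come from.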
I am a graphics/web designer with basic JS/php coding knowledge and I am interested in learning to make interactive walls.
I would like to know from anyone experienced at this.
What tools, languages do you use?
Unity, Flash, Cinder, etc.: which makes it easier?
Thanks
If you just want basic interaction, po-motion.com is a really easy place to start. It tracks motion for simple effects like leaves being brushed away or revealing one image under another. It works using blob detection and can be set up on a Mac or PC using a USB camera and any display you can connect your computer to. It also supports some versions of the Kinect on Windows.
This would be quite hard to make with "basic knowledge of JS/PHP". However, I think the way to handle it would be to build the application as you normally would, but have it controlled by touch input, so the wall becomes a touch/pressure-controlled surface. I'm not an engineer, just a programmer, so I don't know how you would build the actual wall, but I do know Unity has good support for touch input, which I have used. This is a very broad question, but I would recommend looking into Unity's pre-built touch classes and input interpretation.
As you have alluded to in your tags, one solution is to create a gesture-controlled experience with the Kinect.
The best starting point for this would be to download the SDK and get the hardware:
http://www.microsoft.com/en-us/kinectforwindows/
The SDK comes with demos and existing working C# code in Kinect Explorer that creates the 'interactive wall' experience (see Controls Basics, documentation here: https://msdn.microsoft.com/en-us/library/dn188701.aspx).
You can practically run the demo and replace the images to get your experience started. Just make sure you have the right specs on your machine (https://www.microsoft.com/en-us/kinectforwindows/purchase/sensor_setup.aspx ), and you have a good screen.
In terms of programming language, there's no better opportunity to learn C# than from these demos :p
I have an interactive Flash animation which I need to be able to use on an iPhone, embedded within an iPhone app.
I'm not able to link the animation here, but a screenshot of the interface (which is pretty self-explanatory) can be found here - http://imgur.com/i4bUgTB
It's a very simple interaction with just dragging answers into boxes and checking whether the answers are correct. My question is what would the easiest/best way to convert this to something that can be used on the iPhone?
My initial thoughts would be to use Canvas, but I've not looked deeply into what it can do, so I was hoping someone could point me in the right direction.
Thanks for your advice.
Flash embeds logic inside it via scripts. What you are referring to is not an "animation" but a working application.
You should check this link if you use ActionScript 3; I have never done that myself.
If the scenario of the app is as you describe, then using your flash sources inside your iOS app shouldn't be too difficult to achieve IMHO.
I want to create a program that streams the screen of my Mac to my iPhone, kind of like LiveView does. I'm still relatively new to Objective-C, so I don't know where to start to make such an application.
It seems you have to have something installed both on your Mac and on your iPhone, but how would you actually stream the screen of your Mac to your iPhone?
Hope someone can point me in the right direction.
Update of question
Thanks for the answers. It still seems a bit vague to me, and I'm not sure I really need full video streaming. Implementing it also seems to be a pain, since there aren't any really good resources for it.
Taking a screenshot every second or so and streaming it to my iPhone as an image would actually be OK. I've figured out how to stream an image with Bonjour from my Mac to my iPhone.
The screenshot I need to send to my iPhone is of the design I'm currently working on in Photoshop. I've figured out how to take a screenshot and how to get a list of all open windows, but I don't know how to take a snapshot of an open PSD file.
Any suggestions on that?
It's a very big subject, so not really something that can be tackled in a simple response. However, I would suggest that one approach would be to write a VNC client for the iPhone. Indeed, this open source project exists and is probably worth a look:
http://code.google.com/p/vnsea/
Tim
I would go with the frequent screenshot approach. You would prepare a screenshot of the item you want to transmit and then use some easy library like my DTBonjour to transmit these objects via WiFi to iOS clients.
https://www.cocoanetics.com/2012/11/and-bonjour-to-you-too/
If you were using layer-backing then you could also use the renderLayer... methods which would also include sub-layers.
You'd get the most fidelity by encoding the individual screenshots into a streaming video format, though this is far more work.
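The core of the frequent-screenshot approach is just framing: each image travels over the wire with a length prefix so the receiver knows where one frame ends and the next begins (DTBonjour does equivalent framing for you). A sketch in Python for illustration; the frame payloads here are stand-in bytes rather than real screenshots.

```python
# Sketch of length-prefixed framing for streaming screenshots. Each
# frame is a 4-byte big-endian length followed by the image bytes, so
# multiple frames can share one TCP connection without ambiguity.
import struct

def encode_frame(image_bytes: bytes) -> bytes:
    """Prefix the payload with its length so frames can be delimited."""
    return struct.pack(">I", len(image_bytes)) + image_bytes

def decode_frames(stream: bytes):
    """Split a received byte stream back into the original frames."""
    frames, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        frames.append(stream[offset:offset + length])
        offset += length
    return frames

# Two "screenshots" travel over one connection and come back out intact.
wire = encode_frame(b"frame-1") + encode_frame(b"frame-2")
print(decode_frames(wire))  # -> [b'frame-1', b'frame-2']
```

On the Mac side you would fill the payload with JPEG/PNG data from your screenshot code; the iPhone side reads the prefix, then reads exactly that many bytes, and hands them to an image view.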
This is called RFB (or RDP); most remote-screen applications use the RFB/RDP protocols and libraries that implement them.
I need to develop a very simple iPad application that takes an RSS feed of images that will be updated constantly, displays them, and lets you slide through them. As simple as that.
Is there a way to get basic help on doing this? I am very new to iPhone/iPad development and would like help.
To make the question clearer, I would appreciate code samples (other than the ones on Apple's developer site), tutorials, and guidelines.
Thank you :)
First things first, you need to go through some Cocoa Touch tutorials and learn how all the pieces fit together. ;-)
Then check out some NSURLConnection samples on how to pull data down from the network.
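Once the XML is downloaded, the remaining work is pulling the image URLs out of the feed. A sketch in Python for illustration (on iOS you would do the same with NSXMLParser after NSURLConnection delivers the data); the `<enclosure>` layout and the example URLs are assumptions about how the feed is structured.

```python
# Sketch: extract image URLs from an RSS 2.0 feed that carries its
# images in <enclosure> tags. The sample feed below is made up.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Photo feed</title>
    <item>
      <title>Sunset</title>
      <enclosure url="http://example.com/sunset.jpg" type="image/jpeg"/>
    </item>
    <item>
      <title>Harbor</title>
      <enclosure url="http://example.com/harbor.jpg" type="image/jpeg"/>
    </item>
  </channel>
</rss>"""

def image_urls(rss_text: str):
    """Return the enclosure URL of every image item in the feed."""
    root = ET.fromstring(rss_text)
    return [
        enc.get("url")
        for enc in root.iter("enclosure")
        if enc.get("type", "").startswith("image/")
    ]

print(image_urls(SAMPLE_FEED))
# -> ['http://example.com/sunset.jpg', 'http://example.com/harbor.jpg']
```

The resulting URL list is what you would feed into a paging scroll view, downloading each image as the user slides to it.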