Right now I am working on a Flutter application in which I want to add the functionality of taking 3D scans of anything with the camera.
What I want is to open a 3D photosphere view, take those scans, and download them to the gallery, all from within my application.
If anyone can give me an idea or a solution, I'd appreciate it. Thanks in advance.
I have trained a model on Google Colab which takes a point cloud as input and outputs bounding boxes on images.
Now I want to build a Flutter app where the input (a point cloud) is uploaded and the output image is returned as the result.
I have explored the use of TFLite, but it doesn't support some of the TensorFlow operations I need.
I can think of two solutions.
1) I wish I could connect my Flutter app to Google Colab, run the cells, and save the output to my Drive, from where I could return it to my Flutter app. Is this possible? If so, how can I do it?
2) Using Google Cloud Platform. There is a wide range of tools available in GCP, and I have no idea which one to use. Can you suggest approaches that are feasible and easy to implement?
Kindly share your thoughts on how I should proceed.
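For the GCP route, one common pattern is to wrap the trained model in a small HTTP service (for example, a container deployed on Cloud Run) that the Flutter app can POST the point cloud to, receiving the rendered image back. Below is a minimal sketch using only the Python standard library; `run_inference` is a hypothetical placeholder standing in for the actual model, which you would load once at startup:

```python
# Minimal HTTP inference endpoint sketch (standard library only).
# `run_inference` is a placeholder for the real point-cloud model.
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_inference(pointcloud_bytes: bytes) -> bytes:
    """Placeholder: decode the point cloud, run the model, and render
    the image with bounding boxes. Here it just reports the input size."""
    return b"image-with-boxes (%d input bytes)" % len(pointcloud_bytes)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        pointcloud = self.rfile.read(length)      # raw upload from the app
        image_bytes = run_inference(pointcloud)   # model forward pass
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Content-Length", str(len(image_bytes)))
        self.end_headers()
        self.wfile.write(image_bytes)

# To serve (Cloud Run expects the container to listen on its assigned port,
# 8080 by default):
#   HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

The Flutter side would then be an ordinary multipart or raw-bytes HTTP POST, which sidesteps the TFLite operator limitations entirely since the model runs server-side.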
I am using the camera plugin (https://pub.dev/packages/camera). I want to take an image stream from it and use that stream to identify barcodes with Firebase ML Kit. Can I do that? I can't figure out how to get the image stream from the camera plugin or how to feed that stream to ML Kit. Can anybody help me? I am building a live reader, so I don't want to take still pictures with the camera plugin.
There are some prebuilt packages that you can use and extend for your use case, such as barcode_scan.
However, I would encourage you to try a few things first and then turn to the community for help with the code you tried that didn't work.
I am totally new to Flutter.
I want to build a Flutter application where two people can communicate with video/voice, and each of them should be able to draw on the other person's video screen.
The communication would happen in a split screen where both of them can see their own video and the other person's video.
Let's say one person decides to draw a moustache on the other person's face: both of them would see it on their respective screens.
Is there an existing Flutter plugin I can use for this?
I would appreciate any help you can give me.
I am going to build the same thing with Flutter.
I researched a lot and found some dependencies such as Agora RTC and WebRTC for Flutter.
If you want to develop it together, then mail me at tushark690#gmail.com
I'm trying to create an AR game in Unity for an educational project.
I want to create something like Pokémon Go: when the camera opens, the object will be fixed somewhere in the real world and you will have to search for it with the camera.
My problem is that ARCore and Vuforia ground detection (I don't want to use targets) are only supported on a limited set of phones, and I tried the Kudan SDK but it didn't work.
Can anyone give me a tool or a tutorial on how to do this? I just need ideas, or someone to tell me where to start.
Thanks in advance.
The reason plane detection is limited to only some phones at this time is partly that older, less powerful phones cannot handle the required computation.
If you want to make an app that has the largest reach, Vuforia is probably the way to go. Personally, I am not a fan of Vuforia, and I would suggest you use ARCore (and/or ARKit for iOS).
Since this is an educational tool and not a game, are you sure Unity is the way to go? I am sure you may be able to do it in Unity, but choosing the right platform for a project is important - just keep that in mind. You could make a native app instead.
If you want to work with ARCore and Unity (which is a great choice in general), here is the first in a series of tutorials that can get you started as a total beginner.
Let me know if you have other questions :)
You can use the phone's GPS data to place the object: when the user arrives at a specific place, you show it. Search for "GPS-based augmented reality" on Google. You can also check this video: https://www.youtube.com/watch?v=X6djed8e4n0
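The "show the object when the user arrives" idea above boils down to a simple geofence check: compute the great-circle distance between the user's GPS fix and the object's anchor coordinates, and reveal the object once that distance drops below a radius. A minimal sketch of that logic (the function names, coordinates, and 25 m radius are illustrative, not from any AR SDK):

```python
# Geofence check for GPS-anchored AR: show the object only when the
# phone is within `radius_m` metres of the anchor's coordinates.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def should_show_object(user_lat, user_lon, anchor_lat, anchor_lon,
                       radius_m=25.0):
    """True when the user is close enough to the anchored object."""
    return haversine_m(user_lat, user_lon, anchor_lat, anchor_lon) <= radius_m
```

In practice you would run this check on every location update and, once it returns true, instantiate the object in the AR scene; consumer GPS accuracy is on the order of several metres, so very small radii will be unreliable.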
I'm new to Facebook AR Studio and wanted to know if I can attach entire songs and videos to a marker, and whether that marker can be a simple logo or graphic.
AR Studio does not support markers at the moment. As far as I know, there is no information about when this feature will be available.
Regarding playing songs and videos, the only limitation I think you will run into is the size of the effect: Facebook asks developers to keep the effect file under 2 MB.