I want to stitch images together to make a spherical panorama in an iOS app. I tried doing it with OpenCV, but that turned out to be a waste of time since it almost always crashes when I try to stitch photos of the ceiling or the floor. It also consumes a lot of CPU and memory.
I just discovered, while going through Apple's documentation, that the Vision framework has image registration capabilities. After spending hours and hours on it, though, I couldn't figure out how to use it. The documentation is terrible and there are no usage examples whatsoever.
All I really need is a tutorial or a demo or a function that stitches two or more images and I can make my way from there. Any help will be extremely appreciated since my job depends on it.
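For what it's worth, the relevant Vision APIs are VNTranslationalImageRegistrationRequest and VNHomographicImageRegistrationRequest (iOS 11+). Here is a minimal sketch of the homographic variant; the helper name is mine, not Apple's, and which image warps onto which is worth verifying empirically, since the documentation doesn't spell it out:

```swift
import Vision
import simd

// Minimal sketch: ask Vision for the homography relating two overlapping
// photos. `homography(aligning:with:)` is my own helper name, not an
// Apple API; verify the warp direction against your own images.
func homography(aligning floating: CGImage,
                with reference: CGImage) throws -> matrix_float3x3? {
    // The "targeted" image is the one Vision tries to align.
    let request = VNHomographicImageRegistrationRequest(targetedCGImage: floating)
    // The handler processes the image we align against.
    let handler = VNImageRequestHandler(cgImage: reference, options: [:])
    try handler.perform([request])
    let observation = request.results?.first as? VNImageHomographicAlignmentObservation
    return observation?.warpTransform  // 3x3 perspective warp matrix
}
```

Note that Vision only computes the alignment; it does no blending. From the 3x3 matrix you would warp one image into the other's coordinate space and composite the result yourself, e.g. with Core Image or your own Metal pass.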
I am trying to use the official Flutter camera package, but the takePicture method takes a long time (3-5 seconds) regardless of the resolution picked.
Is there a way to speed it up? (I am using a Pixel 5 as a development device.) Currently I show a message asking the user to hold still while the picture is taken, but it feels like bad UX.
Edit: I switched the picture format from yuv420 to jpeg and it is slightly quicker.
This is a known issue with the official camera package:
https://github.com/flutter/flutter/issues/84957
You can give it a thumbs-up to draw more attention to it.
You can also try using the CamerAwesome package instead. It does not have as many features as the official package, but in my testing, it takes pictures pretty much instantly.
CamerAwesome: https://pub.dev/packages/camerawesome
Hello, I'm a new developer, and I have a spider solitaire app on the App Store. When I profile my app, the profiler reports its GPU usage as "very high." I've been researching a solution off and on for months, but haven't found one yet.
The spider variant of solitaire uses 104 cards, all of which can potentially be on the scene at the same time, and they are all PNGs. From my research, I'm fairly sure the PNGs are the reason for the performance hit.
The reason I want PNGs over less resource-intensive formats is to replicate the rounded corners of a playing card. I've tried using shape masks, but that just made performance worse. Beyond that, I have seen references to "tiling" on SO, but haven't been able to parse that information.
I've sized the card images correctly, and I'm holding 60 fps about 99% of the time, but I am of course concerned about the energy consumption. I should mention that the game is 2D SpriteKit.
Is this just a "suck it up, you made a costly design decision" type of situation? Or is there a better way? Is caching a thing here, weak references maybe? Please be nice; I'm not very experienced with this, but I love it.
Thank you very much to anyone who can provide any resources!
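Not from the original thread, but since the question mentions "tiling": the standard SpriteKit answer is a texture atlas, which packs all the card faces into one texture so the renderer can batch those 104 sprites into a handful of draw calls, and lets you bake the rounded corners into the PNGs instead of using shape masks. A minimal sketch, assuming an asset folder named "Cards.atlas" with one image per card:

```swift
import SpriteKit

// Load the atlas once; every texture pulled from it shares one GPU texture.
let cardAtlas = SKTextureAtlas(named: "Cards")

// "spades_01" etc. would be hypothetical image names inside Cards.atlas.
func makeCardNode(named textureName: String) -> SKSpriteNode {
    // Sprites whose textures come from the same atlas can be drawn in a
    // single batch, instead of one texture bind per card.
    return SKSpriteNode(texture: cardAtlas.textureNamed(textureName))
}
```

Xcode builds the atlas automatically from any folder with the .atlas suffix, so this is mostly a matter of reorganizing assets rather than rewriting rendering code.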
A friend and I are trying to develop an app with a very unusual and strange concept. I have a Rubik's cube where all the stickers have been replaced with QR codes, so that to the naked eye they are impossible to distinguish from each other; but with a QR code reader (and a fair bit of patience) you can scan the stickers one by one to reveal which color each in fact is, and use this system to ultimately solve the cube. The idea was well received in the cubing community, and I am now looking to expand on it by making an app that recognizes each sticker, tracks it with a tracking plugin, and virtually colors it in, so that through the lens of my smartphone the stickers have colors. I have so far made a simple version of the app using the Vuforia tracking plugin, and it has produced really promising results.
My problem, though, is that Vuforia only allows 2-3 stickers to be tracked and painted at a time; while I've had it read 6 stickers before, that required me to hold the cube very still. I am developing a brand new set of stickers that aren't QR codes but rather random patterns, which will hopefully be easier for the app to recognize, but what I really need is a tracking plugin that allows more stickers to be tracked at once. Vuforia is, to my understanding, the best free tracking plugin out there, but are there any other tracking plugins that can produce the results I'm seeking? Ideally I would like to track 27 stickers at once, as that's the maximum number of stickers visible on a Rubik's cube at a time, but if that's too many, 9 stickers, the number on a single face, would do just fine too. Can anyone with more knowledge of tracking and extended tracking help me? I didn't make the app, only the stickers, cube, and concept, so I don't know how to use the plugins. Would it be possible to use several plugins at once? I am willing to pay for a license for a better plugin if that's what it takes. If there are better ways to approach this idea, I am also very open to hearing them. Thank you very much for your time.
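Not something the original post tried, but if the app can be iOS-only, one alternative worth evaluating is ARKit's built-in image tracking (iOS 12+), which exposes an explicit cap on simultaneously tracked images. A rough sketch; the "Stickers" resource group name is made up, and note that ARKit wants feature-rich, non-repetitive reference images with a known physical size, so small look-alike stickers may or may not qualify:

```swift
import ARKit

// Sketch: track each sticker pattern as an ARKit reference image.
func makeStickerTrackingConfiguration() -> ARImageTrackingConfiguration {
    let configuration = ARImageTrackingConfiguration()
    // "Stickers" is an assumed AR resource group in the asset catalog,
    // with one reference image (and its physical size) per sticker pattern.
    if let stickers = ARReferenceImage.referenceImages(inGroupNamed: "Stickers",
                                                       bundle: nil) {
        configuration.trackingImages = stickers
        configuration.maximumNumberOfTrackedImages = 9  // one visible face
    }
    return configuration
}

// Usage: sceneView.session.run(makeStickerTrackingConfiguration())
```

Whether the device can actually sustain 9 (let alone 27) tracked images at interactive frame rates is hardware-dependent and would need a real-world test.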
I want to film a batter swinging at a baseball, but the bat is blurry. The video is 30 fps.
Through research I have found that deconvolution seems to be the way to minimize motion blur, but I have no idea whether or how I can implement it in my iOS app's post-processing.
I was hoping someone could point me in the right direction like how to apply a deconvolution algorithm in iOS or what I might need to do...or if it is even possible. I imagine it takes some processing power.
Any suggestions at all are welcome...
Thanks, this is driving me crazy...
After a lot of research and talks with developers about deconvolution on iOS (thanks to Brad Larson for taking the time to give me detailed information), I am confident that it is not possible and/or not worth the time. Even if the hardware can handle the computations (no guarantee), it would be EXTREMELY slow and consume much of the device's battery. I have also been told it could take months to implement the algorithms... if it is possible at all.
Here is the response I received from Apple...
Deconvolution algorithms are generally difficult to implement and can be very computationally intensive. I suggest starting with a simple sharpening technique. Depending on the amount of motion blur in your video, it might just suffice.
The sharpen filters, including CISharpenLuminance and CIUnsharpMask, are now available in iOS 6, so it is moderately easy to test them out.
Core Image Filter Reference
https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html
There is also Core Image sample code from this year's WWDC session 511, "Core Image Techniques"; it's called "Attempt3". This sample demonstrates best practices for applying CIFilters to live video from the iPhone/iPad camera. You may download the session video from the following page: https://developer.apple.com/videos/wwdc/2012/.
Just wanted to pass this information along.
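For anyone trying Apple's suggestion today, here is a minimal sketch of applying CIUnsharpMask to a single frame, using the newer type-safe CIFilterBuiltins API (iOS 13+); the radius and intensity values are guesses to tune against your own footage:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch: unsharp-mask one frame. Parameter values are illustrative only.
func sharpened(_ input: CIImage, context: CIContext) -> CGImage? {
    let filter = CIFilter.unsharpMask()
    filter.inputImage = input
    filter.radius = 2.5     // spatial extent of the edge enhancement
    filter.intensity = 0.8  // strength of the sharpening
    guard let output = filter.outputImage else { return nil }
    return context.createCGImage(output, from: input.extent)
}
```

For video you would reuse a single CIContext across frames; creating one per frame is a common and costly mistake.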
I've googled around for this, but the only thing I've properly come across is the simple-image-processing library on Google Code, and I think that project is dead! Does anyone know of any libraries/frameworks, or even tutorials, on image processing for iPhone apps?
Apple has some sample code here that shows how to do some simple image processing like saturation and hue adjustments.
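A rough modern equivalent of those adjustments using Core Image, in case the sample link goes stale; the specific saturation and hue values are only illustrative:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch: chain a saturation boost and a hue rotation on one image.
func adjusted(_ input: CIImage) -> CIImage? {
    // Boost saturation (1.0 leaves the image unchanged).
    let color = CIFilter.colorControls()
    color.inputImage = input
    color.saturation = 1.4
    guard let saturated = color.outputImage else { return nil }

    // Rotate the hue by 30 degrees.
    let hue = CIFilter.hueAdjust()
    hue.inputImage = saturated
    hue.angle = .pi / 6
    return hue.outputImage
}
```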