I have noticed on some phones that it takes quite a lot of time (sometimes even up to 10 seconds) from the moment Camera.takePicture(null, null, JpegCallback) is called to the moment onPictureTaken in the picture callback is invoked. The Samsung Galaxy S5 seems to be the worst.
This happens when light conditions are bad and picture stabilization kicks in (http://www.androidcentral.com/samsung-galaxy-s5-camera-tip-turn-picture-stabilization-unless-you-really-need-it). Picture stabilization sometimes takes an extremely long time.
I searched the internet for a way to disable it programmatically, but I could only find the setVideoStabilization() method, which handles video stabilization, not picture stabilization.
Does anyone know how to programmatically disable problematic picture stabilization before calling the Camera.takePicture method?
I am building a Unity application as visual stimulation for a neuroscience study. Basically, the application just shows several flickering planes periodically. However, I noticed that after running the application for a few minutes, the fps drops to around 10-30 according to the profiler. The fps drop happens both in the Editor and in a build. Since my script just repeats the same cycle, I am guessing there is some accumulating issue such as GC pressure or a memory leak?
Also, I noticed that when the fps starts to become unstable, I can click the Pause button in the Editor toolbar and then resume the application, and the fps becomes stable again for a while. So I am wondering what actually happens when the Pause button is clicked. Does pressing it clear or reset anything that lets the fps go back to full?
When I had a similar problem, it helped to simply turn off "Record" in the Unity profiler. Recording consumes a lot of memory and drags the fps down, especially when there are many function calls (such as deep recursion).
It doesn't sound like you are constantly creating new GameObjects without deleting the old ones, but that would be the second thing that comes to mind.
If you use a lot of drag-and-drop references (i.e. [SerializeField] fields), you will lose performance and the FPS will drop. Try to fetch everything you need in Awake, especially MonoBehaviour references; this worked for me to increase FPS. By "everything" I don't mean you have to hard-code things everywhere, just the component and GameObject references. Floats, ints, bools, strings, and List<> fields are fine and are better left serialized.
I have some related questions regarding video processing on iOS.
1) Is it possible to pause, then resume, an AVVideoComposition processing session? I ask because I've run into problems merging large 1080p video files while my app is running in the background (i.e., because of the 10-minute background processing limit for apps). My experience so far is that the processing session fails if the app is still in the background when the 10-minute limit is hit and iOS suspends the app. If I could pause the session, I could detect the approach of the 10-minute limit, pause the processing until the user brings the app back to the foreground, then resume it, and so on.
2) Failing that, is there a way to "extend" background processing beyond 10 minutes? My app does not fit any of the categories of apps that are permitted to run continuously in the background, but my need is legitimate and genuine: videos on the iPhone are huge (1080p on the newer iPhones) and processing them takes a while. In a way, it's not fair to users to limit an app's ability to process video taken on their behalf on the very device it was captured with. Bottom line: an app should be allowed either to run to completion, or to pause and resume later, as long as it's in the middle of an AVComposition video processing session. (See the background-task sketch at the end of this post.)
3) If neither of the above options is available, could you recommend a strategy for dealing with this issue? I want to make my app rock-solid in terms of reliability and to let the user know exactly what is going on.
4) Is it possible to up-convert a lower quality video to a higher (albeit fuzzier) resolution? For example: say I want to merge two video clips, one of lower quality and one of higher quality. Currently, if I merge them, the resulting video plays at the resolution of the higher-quality clip; the portion containing the lower-quality clip plays in a smaller frame in the upper-left corner of the larger video frame, with the rest of the frame blacked out. I'd prefer the lower-quality video to fill the larger frame, even if it gets fuzzy as a result. (NOTE: I do want to maintain the aspect ratio of the lower-quality video, so black bands across the top/bottom or left/right are OK.) A rough approach is sketched just below.
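For point 4, one possible approach (a minimal sketch, not a complete implementation) is to apply a scale transform to the lower-quality track in its video composition layer instruction so that it fills the composition's render size while keeping its aspect ratio. The names videoComposition and lowResTrack below are placeholders for your own objects:

    #import <AVFoundation/AVFoundation.h>

    // Scale the lower-quality track up to the composition's render size,
    // preserving its aspect ratio (black bars remain if the ratios differ).
    CGSize renderSize  = videoComposition.renderSize;   // e.g. 1920 x 1080
    CGSize naturalSize = lowResTrack.naturalSize;       // e.g. 640 x 360

    CGFloat scale = MIN(renderSize.width  / naturalSize.width,
                        renderSize.height / naturalSize.height);
    CGFloat tx = (renderSize.width  - naturalSize.width  * scale) / 2.0;
    CGFloat ty = (renderSize.height - naturalSize.height * scale) / 2.0;

    // Scale first, then translate to center the scaled frame.
    CGAffineTransform fill = CGAffineTransformConcat(CGAffineTransformMakeScale(scale, scale),
                                                     CGAffineTransformMakeTranslation(tx, ty));

    AVMutableVideoCompositionLayerInstruction *layerInstruction =
        [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:lowResTrack];
    [layerInstruction setTransform:fill atTime:kCMTimeZero];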
Thanks!
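For reference on points 2 and 3: as far as I know there is no public API to pause an in-flight export, but an app can request finite extra background time and get notified shortly before it runs out, which is the place to cancel the export cleanly so it can be restarted later. A minimal sketch, where exporter is a placeholder for the AVAssetExportSession doing the merge:

    #import <UIKit/UIKit.h>
    #import <AVFoundation/AVFoundation.h>

    // Ask iOS for extra background time around the export.
    __block UIBackgroundTaskIdentifier taskId =
        [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
            // Called shortly before the background allowance expires:
            // cancel the export cleanly and note how far it got.
            [exporter cancelExport];
            [[UIApplication sharedApplication] endBackgroundTask:taskId];
            taskId = UIBackgroundTaskInvalid;
        }];

    [exporter exportAsynchronouslyWithCompletionHandler:^{
        // Finished, failed, or cancelled: either way, give the background task back.
        [[UIApplication sharedApplication] endBackgroundTask:taskId];
        taskId = UIBackgroundTaskInvalid;
    }];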
I am building an app for iOS (iPhone and iPad) where the user can watch video clips of therapeutic exercises. However, I want to overlay some dynamically generated information (the number of reps and sets assigned to them by their physio) either over or next to the playing video. The reps and sets will act like a counter counting down the amount of work left before the next exercise starts playing.
Here is a mock-up of what I would like to achieve, if possible: [mock-up image: video with a dynamic information overlay]
So while the video is playing, "Hold" counts for a specified number of seconds. When the time limit is reached, "Sets" is increased by 1 and Hold starts from 0 again. When all the Sets are completed, "Reps" increases by 1 and Hold and Sets start back at 0. And so on.
Can the video playing and all this information be displayed simultaneously on the iPhone/iPad?
I have looked at a number of video hosting solutions that might have this feature built in, but I couldn't find anything that suits my needs.
Is this possible at all as I have never seen anything like this done before?
Could a solution be to use an iframe to display the video and then have all the other information I want on the screen separate from it? Just a thought...
Yes, this is possible; have a look at this example project:
http://www.musicalgeometry.com/?p=1273
This is for a camera overlay, but it also works for existing videos.
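As a rough sketch of the idea (assuming the era-typical MPMoviePlayerController; all property names such as moviePlayer, counterLabel, holdSeconds, holdLimit, sets, setsPerRep, and reps are hypothetical), a plain UILabel can be added on top of the player's view and driven by an NSTimer, independently of the video itself:

    #import <MediaPlayer/MediaPlayer.h>

    - (void)playExerciseVideo:(NSURL *)videoURL
    {
        self.moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:videoURL];
        self.moviePlayer.view.frame = self.view.bounds;
        [self.view addSubview:self.moviePlayer.view];

        // The counter is just an ordinary label layered on top of the player's view.
        self.counterLabel = [[UILabel alloc] initWithFrame:CGRectMake(20, 20, 280, 40)];
        self.counterLabel.textColor = [UIColor whiteColor];
        self.counterLabel.backgroundColor = [UIColor clearColor];
        [self.moviePlayer.view addSubview:self.counterLabel];

        [self.moviePlayer play];

        // Drive the Hold/Sets/Reps countdown from a timer, independent of playback.
        self.holdTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                          target:self
                                                        selector:@selector(tick:)
                                                        userInfo:nil
                                                         repeats:YES];
    }

    - (void)tick:(NSTimer *)timer
    {
        self.holdSeconds++;
        if (self.holdSeconds >= self.holdLimit) {      // holdLimit: seconds prescribed by the physio
            self.holdSeconds = 0;
            self.sets++;
            if (self.sets >= self.setsPerRep) {        // counters named for illustration only
                self.sets = 0;
                self.reps++;
            }
        }
        self.counterLabel.text = [NSString stringWithFormat:@"Hold %ld  Sets %ld  Reps %ld",
                                  (long)self.holdSeconds, (long)self.sets, (long)self.reps];
    }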
I have developed a test for iPod/iPhone (with MonoTouch, if that is relevant) that measures reaction time. But I need to take into account the time between touching the screen and the actual triggering of the button event. Is there any documentation on that?
It is already very hard, verging on impossible, to get predictable interrupt latency even on real-time operating systems.
But on the iPhone? IMHO impossible. A capacitive touchscreen is not ideal for getting results that are exactly the same for every person and every touch location. And if Mail.app decides to poll for email just at the moment you touch the screen, there will be a bigger delay.
But to make one thing clear: we are talking about a few microseconds or even less than that.
If you want accurate results, you shouldn't use an iPhone. But I guess your app is some kind of game, so nobody cares if your result is 0.01 seconds off. I wouldn't show results as 0.381829191 seconds, though; that fakes an accuracy you'll never get on any smartphone.
What is the lowest reaction time you got in your app?
The time between an actual touch and the system registering it will be negligible.
One key thing: if you are detecting the press with a touch event like touchUpInside, consider using the touchDown event instead, because touchUpInside will not fire until the user's finger leaves the screen.
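To illustrate (in Objective-C; MonoTouch exposes the equivalent control events), here is a minimal sketch that reacts on touch-down and uses the event's own timestamp. reactionButton and stimulusTimestamp are hypothetical names; stimulusTimestamp is assumed to have been recorded with CACurrentMediaTime(), which uses the same time base as the event timestamp:

    // e.g. in viewDidLoad: react when the finger lands, not when it lifts off.
    [self.reactionButton addTarget:self
                            action:@selector(reactionButtonPressed:withEvent:)
                  forControlEvents:UIControlEventTouchDown];

    - (void)reactionButtonPressed:(UIButton *)sender withEvent:(UIEvent *)event
    {
        // event.timestamp is set when the system registered the touch, so it is
        // slightly closer to the physical touch than reading the clock here.
        NSTimeInterval reaction = event.timestamp - self.stimulusTimestamp;
        NSLog(@"Reaction time: %.3f s", reaction);
    }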
I have a camera application that uses my custom overlay on the UIImagePickerController object.
I am calling the takePicture method to take a picture when the user presses a button on my overlay view. Something like:
[imagePicker takePicture];
[self showProcessingIndicator];
The processing indicator is the usual spinning wheel indicating that a picture is being taken. I notice that often the camera does not take a picture immediately after the takePicture method is called, while the processing indicator is showing.
It seems that the camera adjusts its focus (if it is out of focus) and then takes the picture. That is probably the right thing to do. However, I have also noticed a delay in taking a picture even when the camera is focused correctly and does not change its focus. This does not happen every time, and it's hard to say exactly when it happens.
My question is: is there a way to force the camera to take a picture instantly, ignoring everything else? Also, is it possible that subsequent processing (showing the indicator view, for example) is causing the camera to respond more slowly on occasion?
Thanks!
I have also seen this, and I came to the conclusion that taking a picture is a reasonably resource-hungry operation in terms of talking to the camera device, allocating/moving memory, etc. While you can tune your application not to soak up any resources while this piece is running, you can't tell MobileMail, MobileiTunes, etc. not to check for email and so on at that precise moment.
Is there any particular iOS version or device this happens on more than others? Taking a picture on my iPhone 3G with iOS 4.0.x took up to 30 seconds, but it is much improved on the iPhone 4.
The activity indicator will soak up some resources, so it may be a candidate for removal; maybe just play a sound instead. Test to be sure.
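As a rough sketch of the 'sound instead of a spinner' idea (assuming ARC and a short sound file bundled with the app; shutter.caf is just an illustrative name):

    #import <AudioToolbox/AudioToolbox.h>

    // Prepare the sound once, e.g. in viewDidLoad.
    NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"shutter" withExtension:@"caf"];
    SystemSoundID shutterSound = 0;
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)soundURL, &shutterSound);

    // When the shutter button is tapped: fire-and-forget audio feedback is far
    // cheaper than animating a UIActivityIndicatorView while the camera is busy.
    [imagePicker takePicture];
    AudioServicesPlaySystemSound(shutterSound);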