Is there a better / faster way to process camera images than Quartz 2D? - iPhone

I'm currently working on an iPhone app that lets the user take a picture with the camera and then process it using Quartz 2D.
With Quartz 2D I transform the context so the picture appears with the right orientation (scale and translate, because it's mirrored), and then I stack a bunch of layers with blending modes to process the picture.
The initial picture (and the final result) is 3 MP or 5 MP depending on the device, and it takes a large amount of memory once drawn. Reminder: it's not a JPEG in memory, it's bitmap data.
My layers are the same size as the initial picture, so every time I draw a new layer on top of the picture I need the current picture state in memory (A), plus memory for the layer to blend (B), plus memory to write the result into (C).
When I get the result I ditch "A" and "B" and take "C" to the next stage of processing, where it becomes the new "A"...
I need 4 passes like this to obtain the final picture.
Given the resolution of these pictures, my memory usage can climb quite high.
I can see a peak at 14-15 MB, and most of the time I only get level 1 memory warnings, but level 2 warnings sometimes show up and kill my app.
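For scale, here is a minimal back-of-envelope sketch of the per-pass footprint, assuming uncompressed 32-bit RGBA bitmaps (4 bytes per pixel is an assumption about my context format):

# Rough memory estimate for one blending pass, where A + B + C are all alive at once.
# Assumes 4 bytes per pixel (32-bit RGBA); the actual pixel format may differ.
BYTES_PER_PIXEL = 4

def bitmap_mb(megapixels):
    return megapixels * 1_000_000 * BYTES_PER_PIXEL / (1024 * 1024)

for mp in (3, 5):
    one = bitmap_mb(mp)
    print(f"{mp} MP: one bitmap ~{one:.0f} MB, peak per pass (A+B+C) ~{3 * one:.0f} MB")
# 3 MP: one bitmap ~11 MB, peak per pass (A+B+C) ~34 MB
# 5 MP: one bitmap ~19 MB, peak per pass (A+B+C) ~57 MB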
Am I doing this the right way in terms of the general process?
Is there a way to speed up the processing?
Why oh why do memory warnings appear seemingly at random?
Why does processing the second picture take longer than the first, as you can see in this pic:

The duration looks to be about twice as long, so I'd say it's doing twice as much processing. Does the third photo taken take three times as long?
If so, that would seem to indicate it's processing all previously taken photos/layers, which, of course, points to a bug somewhere in your code.
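As a purely hypothetical illustration of the kind of bug to look for (the names below are invented, not taken from your code): if the layers are collected somewhere that is never cleared between shots, each new photo ends up blending everything accumulated so far.

# Hypothetical sketch of the accumulation bug; invented names, not the actual app code.
def blend(base, layer):
    return base + layer            # stand-in for the real Quartz blend

layers = []                        # bug: filled for every photo but never cleared

def process_photo(photo, new_layers):
    layers.extend(new_layers)      # photo N now also carries the layers of photos 1..N-1
    result = photo
    for layer in layers:           # so photo 2 does ~2x the blending work, photo 3 ~3x
        result = blend(result, layer)
    return result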

Related

How to mutate `ui.Image` in Flutter, or draw frequently mutated image?

I have a roughly 1000x1000 image which will be mutated at 60 fps, each time changing the color of only dozens of pixels.
Then I need to display this image in Flutter. Of course I can use Image.memory(the_uint8list_data), but that will be extremely costly since it creates 60 new big images per second. I have also looked at ui.Image, but it is opaque and I do not see any handle to change its pixel data. I originally used Path-based solutions, but that was also too slow, because the mutations can go on for a long time and the Path becomes huge.
The question is, how can I display this image with high performance?

How to get acquired frames at full speed? - Image Event Listener does not seem to be executing after every event

My goal is to read out 1 pixel from the GIF camera in VIEW mode (live acquisition) and save it to a file every time the data is updated. The camera is ostensibly updating every 0.0001 seconds, because this is the minimum acquisition time Digital Micrograph lets me select in VIEW mode for this camera.
I can attach an Image Event Listener to the live image of the camera, with the message map (messagemap = "data_changed:MyFunctiontoExecute"), and MyFunctiontoExecute runs successfully, giving me a file with numerous pixel values.
However, if I let this event listener run for a second, I only obtain close to 100 pixel values, when I was expecting closer to 10,000 (if the live image is being updated every 0.0001 seconds).
Is this because the live image is not updated as quickly as I think?
The event listener certainly is executed at each event.
However, the live display of a high-speed camera will almost certainly not update on each acquired frame. It will perform either some sort of cumulative or sampled display. The exact answer depends on the exact system you are on and how it is configured.
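To put rough numbers on that, here is a quick sketch using the figures from the question (a nominal 0.0001 s acquisition time and roughly 100 observed events per second):

# Compare the nominal acquisition rate with the event rate actually observed.
acquisition_time_s = 0.0001                  # minimum VIEW-mode exposure that could be selected
nominal_fps = 1 / acquisition_time_s         # 10,000 frames acquired per second (nominal)

observed_events_per_s = 100                  # roughly what the listener produced
print(f"nominal acquisition rate: {nominal_fps:.0f} fps")
print(f"observed data_changed events: ~{observed_events_per_s}/s")
print(f"=> the displayed image only updates about 1 in every {nominal_fps / observed_events_per_s:.0f} acquired frames")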
It should be noted that super-high frame rates can usually only be achieved by dedicated firmware and optimized systems. It's unlikely that a "general software approach", in particular one using interpreted, non-compiled code, will be able to provide the necessary speed. This type of approach to the problem might be doomed from the start.
(Instead, one will likely have to create a buffer and then set up the system to acquire data directly into that buffer at the highest possible frame rate. This means coding the camera acquisition directly.)

Psychopy: delayed picture display in an EEG experiment

I'm running an EEG experiment with pictures and I send triggers over the parallel port. I added the triggers to my code via the PsychoPy Builder and synchronized them to the screen refresh. I used a photodiode to test whether pictures are displayed at exactly the same time as the trigger is sent, and I found irregular delays: a trigger is sent between 5 ms and 26 ms before the image is actually displayed.
I don't think that image size is the issue, as I observed the delays even when I replaced the pictures with a small white image. Moreover, there is an ISI period of half a second before each picture is displayed, which should help. I was told by the technicians that the graphics card or a cable should not be an issue. Does anyone have an idea why I get these delays and how they could be eliminated?
In response to the comments, I'm adding the piece of code that sends a trigger:
# *image_training_port* updates
if t >= 4.0 and image_training_port.status == NOT_STARTED:
    # keep track of start time/frame for later
    image_training_port.tStart = t  # underestimates by a little under one frame
    image_training_port.frameNStart = frameN  # exact frame index
    image_training_port.status = STARTED
    win.callOnFlip(image_training_port.setData, int(triggers_image_training))
if image_training_port.status == STARTED and t >= (4.0 + (0.5 - win.monitorFramePeriod * 0.75)):  # most of one frame period left
    image_training_port.status = STOPPED
    win.callOnFlip(image_training_port.setData, int(0))
Actually, this is most likely due to the monitor itself. Try swapping in a different monitor.
Explanation: flat-panel displays often do some "post-processing" on the frame pixels to make them look prettier (almost all flat-panel TVs do this). The post-processing is unwanted not only because it alters your carefully calibrated stimulus, but also because it can introduce delays if it takes longer than a few ms to perform. PsychoPy (or any software) can't detect this; it can only know the time the frame was flipped at the level of the graphics card, not what happens after that.
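If you want to rule out the software/graphics side before blaming the monitor, you can log the actual frame intervals; anything the panel does after the flip will not show up here. A minimal sketch, assuming a standard PsychoPy window:

# Minimal check that the flips themselves are regular; post-flip processing
# inside the monitor cannot be seen from software.
from psychopy import visual, core

win = visual.Window(fullscr=True, waitBlanking=True)
win.recordFrameIntervals = True

for _ in range(120):          # flip for ~2 s at 60 Hz
    win.flip()

intervals = win.frameIntervals
print("mean frame interval: %.2f ms" % (1000 * sum(intervals) / len(intervals)))
print("worst frame interval: %.2f ms" % (1000 * max(intervals)))
win.close()
core.quit()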

How can I create larger worlds/levels in Unity without adding lag?

How can I scale up the size of my world/level to include more gameobjects without causing lag for the player?
I am creating an asset for the asset store. It is a random procedural world generator. There is only one major problem: world size.
I can't figure out how to scale up the worlds to have more objects/tiles.
I have generated worlds up to 2000x500 tiles, but it lags very badly.
The maximum-sized world that will not affect the speed of the game is around 500x200 tiles.
I have generated worlds of the same size with smaller blocks: 1/4th the size (it doesn't affect how many tiles you can spawn)
I would like to create a world at least the size of 4200x1200 blocks without lag spikes.
I have looked at object pooling (it doesn't seem like it can help me that much).
I have looked at LoadLevelAsync (I don't really know how to use it, and rumor has it you need Unity Pro, which I do not have).
I have tried setting chunks active or inactive based on player position (this caused more lag than just leaving the blocks alone).
Additional Information:
The terrain is split up into chunks. It is 2D, and I have box colliders on all solid tiles/blocks. Players can dig and place blocks. I am not worried about the amount of time it takes for the level to load initially, but rather about the smoothness of the game while playing it: no lag spikes during play.
question on Unity Forums
If you're storing each tile as an individual GameObject, don't. Use a texture atlas and 'tile data' to generate the look of each chunk whenever it is dug into or a tile is placed on it.
Also make sure to disable, or potentially even delete, any chunks not within the visible range of the player. Object pooling will help significantly here if you can work out the maximum number of chunks that will ever be needed at once, and just recycle chunks as they go off screen.
DETAILS:
There is a lot to talk about for optimal generation, so I'm going to post this link (http://studentgamedev.blogspot.co.uk/2013/08/unity-voxel-tutorial-part-1-generating.html). It shows you how to do it in a 3D space, but the principles are essentially the same, if not a little easier, for 2D space. The following is just a rough outline of what might be involved; going down this path will result in huge benefits, but will require a lot of work to get there. I've included all the benefits at the bottom of the answer.
Each tile can be made a simple struct with fields like int id, vector2d texturePos, bool visible in its simplest form. You can then store these tiles in a two-dimensional array within each chunk, though to make them even more memory-efficient you could store each texturePos once elsewhere in the program and write a method to look up a texturePos by id.
When you make a change to this two-dimensional array, which represents either the addition or removal of a tile, you update the chunk, which is the actual GameObject used to represent the tiles. By iterating over the tile data stored in the chunk, it is possible to generate a mesh of vertices based on the position of each tile in the two-dimensional array. If visible is false, simply don't generate any vertices for it.
This mesh alone could be used as a collider, but won't look like anything. It is also necessary to generate UV coordinates, which happen to be the texturePos values. When Unity then displays the mesh, it will display specific regions of the texture atlas as defined by the UV coordinates of the mesh.
This has the benefit of resulting in significantly fewer GameObjects, better texture batching for Unity, less memory usage, faster random access for any tile since it carries no MonoBehaviour overhead, and a whole host of additional benefits.
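As a rough, language-neutral sketch of that chunk-rebuild step (written in Python only for brevity; in Unity this logic would live in C# and fill a Mesh's vertex and UV arrays, and all names below are invented):

# Sketch: turn a chunk's 2D tile data into quad vertices plus atlas UVs.
# Invented names; in Unity this would feed Mesh.vertices / Mesh.uv from C#.
from dataclasses import dataclass

@dataclass
class Tile:
    id: int
    texture_pos: tuple     # (u, v) of the tile's cell in the texture atlas
    visible: bool

TILE_SIZE = 1.0
ATLAS_CELL = 0.25          # assumes the atlas is split into 4x4 cells

def build_chunk_mesh(tiles):
    """tiles is a 2D list of Tile; returns vertex and UV lists for visible tiles only."""
    vertices, uvs = [], []
    for y, row in enumerate(tiles):
        for x, tile in enumerate(row):
            if not tile.visible:
                continue                              # no geometry for hidden tiles
            x0, y0 = x * TILE_SIZE, y * TILE_SIZE
            vertices += [(x0, y0), (x0 + TILE_SIZE, y0),
                         (x0 + TILE_SIZE, y0 + TILE_SIZE), (x0, y0 + TILE_SIZE)]
            u, v = tile.texture_pos
            uvs += [(u, v), (u + ATLAS_CELL, v),
                    (u + ATLAS_CELL, v + ATLAS_CELL), (u, v + ATLAS_CELL)]
    return vertices, uvs

Whenever a tile is dug out or placed, only the affected chunk's mesh is rebuilt, rather than touching thousands of per-tile GameObjects.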

How to transition from a prerecorded video to real time video?

I have come up with an algorithm in Matlab that lets me recognize hand gestures in prerecorded videos. Now I would like to run the same code on real-time video, but I am not sure how to proceed after these 2 lines:
vid=videoinput('winvideo',1);
preview(vid);
(the real-time video is now on)
I am thinking about a loop: while the video is on, repeatedly snap images in order to analyze them.
for k = 1:numFrames
    frame = getsnapshot(vid);   % grab the current frame from the live feed
    % my code is applied here
end
So, I would like to know how to make this transition from prerecorded videos to real-time video.
Your help with this is much appreciated!
I would suggest that you first verify whether you can perform acquisition + gesture recognition in real time using your algorithm. For that, first read video frames in a loop, render or save them, and compute the reading and rendering overhead of a single frame, say t1. Also compute the time taken by your algorithm to process one image, say t2. The throughput (number of frames processed per second) of your system will be
throughput = 1/(t1 + t2)
It is important to know how many frames you need to recognize a gesture. First, try to compute the minimum number of images you need to identify a gesture in a given time, and then verify in real time whether you can process that number of images in the same time.
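A minimal timing sketch of that measurement (shown in Python purely for illustration; in Matlab you would wrap the equivalent steps in tic/toc, and read_frame / recognize_gesture below are placeholder stubs standing in for your own code):

# Illustration of measuring t1 (grab/render one frame) and t2 (process one frame).
import time

def read_frame():                  # placeholder stub: acquisition + rendering of one frame
    time.sleep(0.005)

def recognize_gesture():           # placeholder stub: the gesture algorithm on one frame
    time.sleep(0.020)

def average_seconds(fn, n=50):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

t1 = average_seconds(read_frame)
t2 = average_seconds(recognize_gesture)
print("t1 = %.1f ms, t2 = %.1f ms, throughput = %.1f frames/s" % (1000 * t1, 1000 * t2, 1 / (t1 + t2)))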