I have an application that receives streaming data over Bluetooth and displays it as an image. The data is just RGB values (0-255 per channel). It works fine in C#, but I'm having trouble doing the same thing on the iPhone. In C# it's implemented as a queue: when a new row of data arrives, I dequeue one row's worth of pixels, enqueue the new row, and then write the queue out to the image's pixel array. This seemed faster than reading the entire image, shifting it, adding the new data, and writing the pixel array back. C# has a method for converting a queue to an array, since a queue won't be contiguous in memory. My question is: is there a faster way to do this than manually repopulating an array from a queue? Or is there a faster way than using a queue at all? The image only needs to shift down X pixels, then have X new pixels written into the blank spot. Either way, I can't figure out how to convert the queue to an array, since the only way to access anything besides the first value is to pop values off the queue. Any suggestions?
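For what it's worth, one alternative to a queue is a circular (ring) buffer of rows: overwrite the oldest row in place and remember where the logical top is, so nothing is ever dequeued and only the one incoming row is copied. A minimal C sketch, with the image dimensions and names being illustrative assumptions:

```c
#include <stdint.h>
#include <string.h>

#define IMG_W 320                  /* illustrative image width  */
#define IMG_H 240                  /* illustrative image height */

/* Rows live in a fixed circular buffer; `top` marks the oldest row. */
static uint8_t rows[IMG_H][IMG_W * 3];   /* 3 bytes per pixel (RGB) */
static int top = 0;

/* A new row simply overwrites the oldest one: no dequeue, no shifting. */
void push_row(const uint8_t *rgb_row)
{
    memcpy(rows[top], rgb_row, sizeof rows[0]);
    top = (top + 1) % IMG_H;
}

/* Writing the whole thing to a contiguous pixel array costs exactly two
   memcpys: the rows from `top` to the end, then the rows before `top`. */
void copy_to_pixels(uint8_t *dst)
{
    size_t row_bytes = sizeof rows[0];
    memcpy(dst, rows[top], (IMG_H - top) * row_bytes);
    memcpy(dst + (IMG_H - top) * row_bytes, rows[0], top * row_bytes);
}
```

However the rows are rotated, the copy-out step stays two memcpy calls, which sidesteps the queue-to-array conversion entirely.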
The bottleneck on iOS devices won't be converting a queue to an array, but getting the bitmap array into the device's display (image or texture) memory. This path can be optimized by using a tiled layer and updating only the pixels in the tiles at the trailing end. You can then scroll the tiled layer inside a scroll view to line up the latest streamed data.
I have a roughly 1000x1000 image that is mutated at 60fps, each time changing the color of only a few dozen pixels.
Then I need to display this image in Flutter. Of course I can use Image.memory(the_uint8list_data), but that is extremely costly, since it creates 60 new big images every second. I have also checked ui.Image, but it is opaque and I don't see any handle for changing its pixel data. I originally used Path-based solutions, but those were also too slow, because the mutation can go on for a long time and the Path grows huge.
The question is, how can I display this image with high performance?
On the STM32F746G-Discovery, I want to draw a moving graph, like plotting ADC output in real time.
Is it possible to shift the start address of the LTDC buffer on every new ADC sample, so the graph scrolls with minimal CPU intervention and memory traffic? Something like a DSP circular buffer.
One solution might be to DMA-copy the LTDC buffer onto itself, from &LTDCbuffer+1 to &LTDCbuffer, and then fix up the last vertical line of the LCD. But the DMA would itself consume memory bandwidth.
Update
New solution: I think it's better to duplicate the buffer and shift the buffer (window) start address on every data update. This requires 2x the buffer size, because I set the shifting period equal to one window (one buffer length). Since every row of data shifts back (left) on the LCD this way, each update only has to write one new column at the right-most position and construct the same column in the other buffer (via DMA2D in the worst case). After the window has shifted by its full length, the second buffer holds the same image, ready to jump to so the graph can keep scrolling. I also want to keep a data matrix equal to the LCD's horizontal width of 480, and rather than shifting it by DMA I opted to circulate and update it through a pointer (I've heard of circular pointers or buffers on the F7, but what is the starting point?). I think this is the fastest approach possible, isn't it?
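A minimal C sketch of this double-width scrolling scheme, assuming a 480x272 RGB565 layer and the STM32 HAL's HAL_LTDC_SetPitch / HAL_LTDC_SetAddress; the buffer name and geometry here are illustrative, not taken from the question:

```c
#include <stdint.h>
#include "stm32f7xx_hal.h"   /* assumed HAL environment */

#define LCD_W 480
#define LCD_H 272
#define BUF_W (2 * LCD_W)    /* double-width buffer: each column stored twice */

extern LTDC_HandleTypeDef hltdc;      /* configured elsewhere */
static uint16_t fb[LCD_H * BUF_W];    /* RGB565, memory pitch = BUF_W pixels */
static uint32_t head = 0;             /* column index in [0, LCD_W) */

/* Call once after LTDC init: the visible line is LCD_W pixels wide,
   but the line pitch in memory is BUF_W pixels. */
void scroll_init(void)
{
    HAL_LTDC_SetPitch(&hltdc, BUF_W, 0);
    HAL_LTDC_SetAddress(&hltdc, (uint32_t)fb, 0);
}

/* Push one new column of samples; the graph scrolls left by one pixel.
   Each column is written twice (at head and head + LCD_W) so the window
   can wrap from the end of the buffer back to the start seamlessly. */
void scroll_push_column(const uint16_t column[LCD_H])
{
    for (uint32_t y = 0; y < LCD_H; y++) {
        fb[y * BUF_W + head]         = column[y];
        fb[y * BUF_W + head + LCD_W] = column[y];
    }
    head = (head + 1) % LCD_W;
    /* Visible window now covers columns head .. head + LCD_W - 1,
       with the newest column at the right edge. */
    HAL_LTDC_SetAddress(&hltdc, (uint32_t)&fb[head], 0);
}
```

In this per-column variant each new sample costs two column writes plus one address update; no bulk DMA copy is needed at all.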
I've been working on a project that reads video frames, stores them in an array, and then performs operations on them. Each frame is split into 6 subsections that I have to analyze individually. I had previously been cropping the video beforehand and loading that in; now the program lets the user load the whole movie and crop each sixth themselves, and then runs consecutively on each sixth. The problem is that MATLAB just crashes when loading this now six-times-more-pixel-dense video (it's about 120k frames). Assuming I can get the user to specify the 6 cropping areas beforehand, is there any way to load only a specific area of the movie at a time? Rather than storing the whole frame and THEN cropping out a sixth, as I do now, just store a sixth right off the bat?
VideoReader does not let you load part of a frame into memory, but it does let you load only certain frames from the video into MATLAB instead of the entire file. I agree with sam that loading 120K frames of video into MATLAB is a very bad idea. Consider using the READ syntax that lets you specify start and stop frames, so you read the video in chunks; you can then use array indexing to slice each frame into its 6 portions.
I'm trying to track objects in separate frames of a video. If I do a background subtraction before storing the images, their size becomes much smaller (about one fifth), so I was wondering whether I could also read these images back faster, since most of the pixels are zero. Still, a plain imread didn't make any difference.
I also tried the PixelRegion option to load only the locations of the objects, but that didn't work either, since there are about ten objects in each frame.
It may be faster to store the frames as a video file, rather than individual images. Then you can read them using vision.VideoFileReader from the Computer Vision System Toolbox.
In apps like iDraft and Penultimate, undo and redo are performed very smoothly, without any delay.
I have tried many approaches. Currently, my testing app writes raw pixel data directly to a file after each undo using [NSData writeToFile:atomically:], but I am getting a 0.6s delay.
Can anyone give some hints on it?
I don’t know iDraft or Penultimate, but chances are they have a simpler drawing model than you do. When writing a drawing app you choose between two essential drawing representations: either you track raw pixels, or you track drawing objects like lines, circles and so on. (In other words, you choose between a pixel and a vector representation.)
When you draw using vectors, you don’t track individual pixels. Instead you know there should be a line between points X and Y of a given width, color and other params, and when you need to draw that representation, you call Quartz to stroke the line. In this case the model (the drawing representation) consists of a few numbers and takes little memory, so you can keep many versions of a single drawing in memory, allowing for quick and convenient undo and redo.
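To make the vector model concrete, here is a minimal C sketch; the types and capacity are illustrative, and on iOS the replay callback would wrap a Quartz call such as CGContextStrokePath:

```c
#include <stddef.h>
#include <stdint.h>

/* One drawing command: a line with its parameters. */
typedef struct {
    float    x0, y0, x1, y1;  /* endpoints */
    float    width;           /* stroke width */
    uint32_t color;           /* 0xRRGGBBAA */
} Stroke;

/* The whole drawing is just a list of commands plus two cursors. */
typedef struct {
    Stroke strokes[1024];     /* fixed capacity, enough for a sketch */
    size_t live;              /* strokes[0..live-1] are visible */
    size_t recorded;          /* strokes[live..recorded-1] are redoable */
} Drawing;

/* Adding a stroke truncates any redo tail. */
void add_stroke(Drawing *d, Stroke s)
{
    if (d->live < 1024) {
        d->strokes[d->live++] = s;
        d->recorded = d->live;
    }
}

/* Undo/redo only move a cursor; no pixels are copied anywhere. */
void undo(Drawing *d) { if (d->live > 0) d->live--; }
void redo(Drawing *d) { if (d->live < d->recorded) d->live++; }

/* Rendering replays the visible commands through a stroke callback. */
void render(const Drawing *d, void (*stroke_fn)(const Stroke *))
{
    for (size_t i = 0; i < d->live; i++)
        stroke_fn(&d->strokes[i]);
}
```

Undo and redo just move the `live` cursor, which is why the vector model makes them essentially free.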
Keep your undo stack in memory. Don't write to disk for every operation. Whether you keep around bitmaps or vectors, your file ops shouldn't be on the critical path for every paint operation you do.
If your data model is full bitmaps, keep just the changed rect for undo/redo.
As previously said, you probably don't need to write the data to disk for every operation. Even in a pixel-based case, unless you are trying to undo a full-screen filter, all you need to keep is the data contained within the bounding rectangle of the brush stroke the user performed.
You can double-buffer your drawing, i.e. keep a copy of the image from before the draw, draw into the copy, determine the bounding rect of the user's operation, then copy and retain the corresponding data from the original (along with its size and location). On undo you take that saved copy and paste it over the modified area.
This method extends to redo: on undo, first store the area you are about to overwrite.
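A minimal C sketch of this save/restore-by-rect idea, assuming a 32-bit RGBA bitmap stored as a flat array (the names and the omitted bounds checking are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int x, y, w, h; } Rect;

/* One undo record: the pixels a brush stroke overwrote, plus where. */
typedef struct {
    Rect      rect;
    uint32_t *pixels;   /* rect.w * rect.h saved pixels */
} UndoPatch;

/* Copy the pixels under `r` out of a bitmap whose rows are `stride`
   pixels long; this is what you retain on the undo stack. */
UndoPatch save_rect(const uint32_t *bitmap, int stride, Rect r)
{
    UndoPatch p = { r, malloc(sizeof(uint32_t) * r.w * r.h) };
    for (int row = 0; row < r.h; row++)
        memcpy(p.pixels + row * r.w,
               bitmap + (r.y + row) * stride + r.x,
               sizeof(uint32_t) * r.w);
    return p;
}

/* Paste a saved patch back over the modified area. */
void restore_rect(uint32_t *bitmap, int stride, const UndoPatch *p)
{
    for (int row = 0; row < p->rect.h; row++)
        memcpy(bitmap + (p->rect.y + row) * stride + p->rect.x,
               p->pixels + row * p->rect.w,
               sizeof(uint32_t) * p->rect.w);
}
```

For redo, call save_rect on the same rect just before restore_rect, so the change you are undoing can be pasted back later.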