Apps like iDraft and Penultimate perform undo and redo very well, without any delay.
I have tried many approaches. Currently my test app writes raw pixel data directly to a file after each undo using [NSData writeToFile:atomically:], but I am getting a 0.6 s delay.
Can anyone give me some hints on this?
I don't know iDraft or Penultimate, but chances are they have a simpler drawing model than yours. When writing a drawing app you can choose between two essential drawing representations: either you track raw pixels, or you track drawing objects such as lines, circles and so on. (In other words, you choose between a pixel and a vector representation.)
When you draw using vectors, you don't track the individual pixels. Instead you know there should be a line between points X and Y with a given width, color and other parameters, and when you have to draw that representation, you call Quartz to stroke the line. In this case the model (the drawing representation) consists of a few numbers, takes little memory, and therefore you can keep many versions of a single drawing in memory, allowing for quick and convenient undo and redo.
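For illustration, a minimal sketch of such a vector model in a line-only app (the Stroke class and the two stack properties are hypothetical names, not taken from iDraft or Penultimate):

#import <UIKit/UIKit.h>

@interface Stroke : NSObject
@property (nonatomic) CGPoint from;
@property (nonatomic) CGPoint to;
@property (nonatomic) CGFloat width;
@property (nonatomic, strong) UIColor *color;
@end

@implementation Stroke
@end

// The whole drawing is just an array of Stroke objects, so undo is a
// constant-time pop and redo a push; no pixels are copied at all.
- (void)undo {
    Stroke *last = self.strokes.lastObject;
    if (last == nil) return;
    [self.strokes removeLastObject];
    [self.redoStack addObject:last];
    [self setNeedsDisplay]; // re-stroke the remaining lines below
}

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (Stroke *s in self.strokes) {
        CGContextSetStrokeColorWithColor(ctx, s.color.CGColor);
        CGContextSetLineWidth(ctx, s.width);
        CGContextMoveToPoint(ctx, s.from.x, s.from.y);
        CGContextAddLineToPoint(ctx, s.to.x, s.to.y);
        CGContextStrokePath(ctx);
    }
}

Since each Stroke is a handful of numbers, keeping hundreds of them (or several snapshots of the whole array) in memory is cheap, which is what makes instant undo possible.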
Keep your undo stack in memory. Don't write to disk for every operation. Whether you keep around bitmaps or vectors, your file ops shouldn't be on the critical path for every paint operation you do.
If your data model is full bitmaps, keep just the changed rect for undo/redo.
As previously said, you probably don't need to write the data to disk for every operation. Even in the pixel-based case, unless you are trying to undo a full-screen filter, all you need to keep is the data contained within the bounding rectangle of the brush stroke that the user performed.
You can double-buffer your drawing, i.e. keep a copy of the image before the draw, draw into the copy, determine the bounding rect of the user's operation, then copy and retain the appropriate data from the original (together with its size and location). On undo you paste that copy back over the modified area.
This method extends to redo: on undo, store the area that you are about to overwrite.
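A minimal sketch of that rect-based undo, assuming the canvas is backed by a CGBitmapContextRef called canvasContext (UndoPatch and all names here are mine, and dirtyRect is assumed to be in the image's top-left coordinate space):

@interface UndoPatch : NSObject
@property (nonatomic) CGImageRef image; // pixels as they were before the stroke
@property (nonatomic) CGRect rect;      // region of the canvas they cover
@end

// Before the stroke: snapshot only the region it will modify.
- (UndoPatch *)patchForRect:(CGRect)dirtyRect {
    CGImageRef whole = CGBitmapContextCreateImage(canvasContext);
    UndoPatch *patch = [UndoPatch new];
    patch.rect = dirtyRect;
    patch.image = CGImageCreateWithImageInRect(whole, dirtyRect);
    CGImageRelease(whole); // CGBitmapContextCreateImage is copy-on-write, so this is cheap
    return patch;
}

// On undo: paste the saved pixels back over the modified area.
- (void)applyPatch:(UndoPatch *)patch {
    CGRect drawRect = patch.rect;
    // Flip y: CGImage rects are top-left based, CGContext is bottom-left.
    drawRect.origin.y = CGBitmapContextGetHeight(canvasContext) - CGRectGetMaxY(patch.rect);
    CGContextDrawImage(canvasContext, drawRect, patch.image);
}

For redo, snapshot the same rect again immediately before pasting the undo patch, exactly as described above; and remember to CGImageRelease the stored image when a patch is deallocated.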
I have tried using the methods "Transform.scale" (for zooming) and "Transform.translate" (for moving), but they seem to trigger the paint method of the CustomPainter (even though the method "shouldRepaint" returns false; in fact, that method is not even invoked).
Maybe there is some other way of doing what I want, i.e. to move and zoom something I have created with a Canvas, without executing the paint method again?
In the example code below there are three sliders: one for zooming out (i.e. reducing the size), one for moving horizontally, and one for moving vertically.
The "paint" method simply draws a polygon (see the screenshot attached below).
While the example code below is simple (a small amount of hardcoded vector data, fast to render), I want to emphasize that the solution I am looking for needs to support MANY complicated drawings (with LOTS OF data, slow to render); i.e. it is not an acceptable solution to suggest manually converting this one vector image to a raster image (e.g. PNG/JPG/GIF) instead.
Below I try to describe a scenario that needs to be supported:
Imagine you want to implement an app with a huge dropdown list with lots of different vector data images to be selected.
The data of those VECTOR images may be retrieved from the internet or from a big local SQLite database.
IMPORTANT: The DATA in those images is NOT raster data such as jpg, png, gif... but the VECTOR data to be retrieved is defined as lots of screen coordinates for points, lines, polygons, textual labels, icons, color values... and so on.
Such VECTOR data will then be used for creating the image, and as far as I understand you should use CustomPainter with the paint method unless there are better options?
Also imagine that each such selected image with vector data is HUGE, with MANY THOUSANDS of lines, polygons, icons... and so on, and that the paint method might take seconds to create the image.
BUT, once it is drawn the data will not change.
So, since the "paint" method might take seconds to render a huge amount of vector data, you want to avoid invoking it frequently, when moving or zooming.
Therefore I think it would be desirable if it would be possible to use the method "shouldRepaint" to return false, but it seems as that method is not even invoked at all when resizing or moving with the Transform methods "scale" and "translate".
But maybe there is some other solution to support the above described scenario, maybe some other class than CustomPainter that do not automatically trigger the paint method when applying Transform scale/translate ?
I hope there is a solution where the Flutter framework can somehow automatically reuse the bitmap (e.g. the color values at certain bits and bytes) that was created, potentially slowly, by the paint method, and scale/zoom and move it faster than having to execute the paint method again.
If you just want to zoom, scale, pan on your images, then you can try the new InteractiveViewer widget.
InteractiveViewer class
A widget that enables pan and zoom interactions with its child. The user can transform the child by dragging to pan or pinching to zoom.
https://api.flutter.dev/flutter/widgets/InteractiveViewer-class.html
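For example, a minimal sketch (assuming Flutter 1.20+, where InteractiveViewer was introduced; MyPolygonPainter stands in for the painter from the question). The RepaintBoundary is meant to let the transform reuse the rasterized layer instead of re-running the expensive paint method:

import 'package:flutter/material.dart';

class ZoomablePolygon extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return InteractiveViewer(
      minScale: 0.1,  // allow zooming far out
      maxScale: 10.0, // and far in
      boundaryMargin: const EdgeInsets.all(double.infinity), // free panning
      child: RepaintBoundary(
        // The painter rasterizes once into its own layer; pan/zoom then
        // transform that layer rather than invoking paint again.
        child: CustomPaint(
          size: const Size(300, 300),
          painter: MyPolygonPainter(), // hypothetical painter from the question
        ),
      ),
    );
  }
}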
I am working on a real-time plot application where a stream of data is plotted on screen. Earlier, using gtkmm2, I did this with a custom widget (derived from Gtk::Bin) that has a member function which creates a Cairo context and does the plotting.
Now, with gtkmm3, I am unable to plot from any method other than on_draw. Here is what my custom draw method body looks like:
Gtk::Allocation oAllocation = get_allocation();
Glib::RefPtr<Gdk::Window> refWindow = get_window();
Cairo::RefPtr<Cairo::Context> refContext = refWindow->create_cairo_context();

refWindow->begin_paint_rect(oAllocation); // added later
refContext->save();
refContext->reset_clip();
refContext->set_source_rgba(1, 1, 1, 1);
refContext->move_to(oAllocation.get_x(), oAllocation.get_y());
refContext->line_to(oAllocation.get_x() + oAllocation.get_width(),
                    oAllocation.get_y() + oAllocation.get_height());
refContext->stroke();
refContext->restore();
refWindow->end_paint();
Initially I derived the class from Gtk::DrawingArea, then tried Gtk::Bin while adding the begin_paint_rect call.
Is it forbidden to draw in any place other than on_draw?
For something like a plot (or anything that is rather complex to draw) I advise using a buffer; I lost a month of my life because I read that gtkmm3 does buffering so that "double buffering" isn't needed anymore (as opposed to gtkmm2), but it ain't that simple (read: that isn't true).
So, what you should do is just draw to your own surface; and every time you change something call queue_draw_region or queue_draw_area.
Then, in on_draw, get the list of clip rectangles and copy those from your private surface to the cr that is passed to the on_draw function. Cairo normally does the exact same thing (or so they claim), copying what you just copied again to the screen; so you should turn that off (this should be possible, I read).
The reason you can't use Cairo's buffering is that it doesn't KEEP that buffer; what you get is some corrupted surface, so you are forced to redraw EVERYTHING inside the clip rectangle list. That wouldn't be too bad if you (your application) were the only one making changes (as per your queue_draw_* calls): then you could set a flag, invalidate the part(s) that need redrawing and simply postpone the draw until you get to on_draw. But sometimes on_draw is called for other reasons, for example when you open a menu that goes over your drawing area. I think this is a bug (or a design error), but it is the way it is.
The result is that you can't know what you have to redraw EXCEPT by looking at the clip rectangle list, which makes it incredibly hard to redraw just a part of your area unless your drawing is made up of many separate rectangles (like, say, a chess board). The only feasible way is to keep a full copy of the image in memory (your private surface) and just copy the clip rectangle list from there in on_draw.
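A minimal sketch of that setup, assuming a widget derived from Gtk::DrawingArea and a fixed-size plot (the class and member names are mine):

#include <gtkmm.h>
#include <cmath>

class PlotArea : public Gtk::DrawingArea {
public:
    PlotArea()
    : m_buffer(Cairo::ImageSurface::create(Cairo::FORMAT_ARGB32, 800, 600)) {}

    // Called from the data stream: draw into the private surface only,
    // then invalidate just the changed area.
    void plot_point(double x, double y) {
        Cairo::RefPtr<Cairo::Context> cr = Cairo::Context::create(m_buffer);
        cr->arc(x, y, 1.5, 0.0, 2.0 * M_PI);
        cr->fill();
        queue_draw_area((int)x - 2, (int)y - 2, 4, 4);
    }

protected:
    // on_draw only copies; the clip GTK set up already restricts the
    // copy to the invalidated region.
    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override {
        cr->set_source(m_buffer, 0, 0);
        cr->paint();
        return true;
    }

private:
    Cairo::RefPtr<Cairo::ImageSurface> m_buffer;
};

This way the expensive plotting happens exactly once per data point, and on_draw stays a dumb, fast blit no matter why it gets called.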
Is it forbidden to draw in any place other than on_draw?
Basically: Yes.
The idea is that you call gtk_widget_queue_draw() or gtk_widget_queue_draw_area() when you want to cause a redraw.
https://developer.gnome.org/gtk3/stable/GtkWidget.html#gtk-widget-queue-draw
https://developer.gnome.org/gtk3/stable/GtkWidget.html#gtk-widget-queue-draw-area
How would I efficiently draw a CGPath on a CATiledLayer? I'm currently checking if the bounding box of the tile intersects the bounding box of the path like this:
-(void)drawLayer:(CALayer*)layer inContext:(CGContextRef)context {
    CGRect boundingBox = CGPathGetPathBoundingBox(drawPath);
    CGRect rect = CGContextGetClipBoundingBox(context);
    if (!CGRectIntersectsRect(boundingBox, rect))
        return;
    // Draw path...
}
This is not very efficient, because drawLayer:inContext: is called multiple times from multiple threads, which results in drawing the path many times.
Is there a better, more efficient way to do this?
The simplest option is to draw your curve into a large image and then tile the image. But if you're tiling, it probably means the image would be too large, or you would have just drawn the path in the first place, right?
So you probably need to split your path up. The simplest approach is to split it up element by element using CGPathApply. For each element, you can check its bounding box and determine if that element falls in your bounds. If not, just keep track of the last end point. If so, then move to the last end point you saw and add the element to a new path for this tile. When you're done, each tile will draw its own path.
Technically you will "draw" things that go outside your bounds here (such as a line that extends beyond the tile), but this is much cheaper than it sounds. Core Graphics is going to clip single elements very easily. The goal is to avoid calculating elements that are not in your bounding box at all.
Be sure to cache the resulting path. You don't need to calculate the path for every tile; just the ones you're drawing. But avoid recalculating it every time the tile draws. Whenever the data changes, dump your cache. If there are a very large number of tiles, you can also use NSCache to optimize this even better.
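A minimal sketch of the element-by-element split described above (the state struct and all names are mine; only move and line elements are handled, curves follow the same pattern using their control points):

typedef struct {
    CGRect tileRect;            // bounds of the tile being built
    CGMutablePathRef tilePath;  // path accumulated for this tile
    CGPoint lastPoint;          // end point of the previous element
    bool needsMove;             // true when preceding elements were skipped
} TileSplitState;

static void TileSplitApplier(void *info, const CGPathElement *element) {
    TileSplitState *state = (TileSplitState *)info;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
            state->lastPoint = element->points[0];
            state->needsMove = true;
            break;
        case kCGPathElementAddLineToPoint: {
            CGPoint end = element->points[0];
            // Bounding box of this single segment.
            CGRect box = CGRectStandardize(
                CGRectMake(state->lastPoint.x, state->lastPoint.y,
                           end.x - state->lastPoint.x,
                           end.y - state->lastPoint.y));
            if (CGRectIntersectsRect(box, state->tileRect)) {
                if (state->needsMove) {
                    CGPathMoveToPoint(state->tilePath, NULL,
                                      state->lastPoint.x, state->lastPoint.y);
                    state->needsMove = false;
                }
                CGPathAddLineToPoint(state->tilePath, NULL, end.x, end.y);
            } else {
                state->needsMove = true; // skipped; move before the next kept element
            }
            state->lastPoint = end;
            break;
        }
        default:
            break;
    }
}

// Usage, per tile (cache the result keyed on the tile rect):
// TileSplitState state = { tileRect, CGPathCreateMutable(), CGPointZero, true };
// CGPathApply(drawPath, &state, TileSplitApplier);
// state.tilePath now contains only the elements that touch this tile.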
You don't show where the path gets created. If possible, you might try building the path up in the -drawLayer:inContext: method, only creating the portion of it needed for the tile being drawn.
As with all performance problems, you should use Instruments to profile your code and find out exactly where the bottlenecks are. Have you tried that already, and if so, what did you find?
As a side note, is there a reason you're using CGPath instead of UIBezierPath? From Apple's documentation:
For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see "Drawing Shapes Using Bezier Paths."
I'm new to game programming, and I have a question. I want to have a dotted circle drawn on the screen. I can use one big sprite (for example 256x256 pixels) which contains the whole circle, or I can use many small sprites representing the dots.
I use the cocos2d libs and I'm able to render using a batch. So what is the best way to perform such a task?
In my opinion your best bet (if all the dots are the same) is to have one sprite of the dot, and repeat it in the shape you are looking for.
Generally you'll want a single asset for each unique graphic. You can combine those assets into a single sprite sheet and reuse them. This allows for more flexibility as well as speed.
Most of today's graphics hardware is optimized for texture dimensions that are a power of two. Your sprites are likely to have other dimensions. By using small sprites, you can minimize the padding that is needed to fill this space (and thus minimize the CPU/GPU cycles spent on correcting this internally). Besides that, the file size will be smaller, since you need less overhead and compression is likely to be more effective.
Go with one large sprite. It's fewer calls into the rendering engine, and adds flexibility to change the look (for example, if you decide to have the circle made of dashed lines rather than dots).
Simple Painting:
Fingering the iPhone screen paints temporary graphics to a UIView. These temporary graphics are erased after a touch ends, and are then stored into an underlying UIView.
The process is simple:
1) Touch Starts & Moves >>
2) Paint Temporary Graphics on top UIView >>
3) Touch Ends >>
4) Pass Temporary Graphics To underlying UIView >>
5) Underlying UIView adds Temporary Graphics to Stored Graphics >>
6) Underlying UIView Re-Draws all Stored Graphics >>
7) Delete Temporary Graphics on top UIView.
In this manner, I can accumulate graphics on the underlying UIView while maintaining responsive painting of the temporary graphics on the top UIView.
(Sidenote: Each "Drawing" is simply an NSArray of custom "Point" Objects which are just NSObject containers for CGPoints. And the underlying UIView has a seperate NSArray, where it stores these NSArrays of CGPoints)
The Problem Is:
When a great deal of graphics has accumulated on the underlying UIView, it takes time to draw it all out on the screen. And any new drawings on the top UIView will not be displayed until the drawing of the underlying stored graphics is complete. Thus, there is a noticeable lag when many graphics are on the screen.
Question:
Can anyone think of a good way to improve performance here, so that there is no noticeable lag between drawings when there are a lot of graphics on the screen?
An NSArray of CGPoints? You mean an NSArray of NSValues holding CGPoints? That's an incredibly time-expensive way to hold what has to be a huge number of values that you access constantly. You could store this information in many better ways. A 2-dimensional C-array representing the entire screen is the most obvious. You may also want to look into bitmap image representations, and draw directly into a CGImage rather than maintaining a bunch of CGPoints. Take a look at the Quartz 2D Programming Guide.
EDIT:
Your object (below) is the equivalent of an NSValue, just a little more specialized. There's a lot of overhead going on here when you have many, many objects (~100,000 I'm guessing when the screen is nearly full; more if you're not removing duplicates; run Instruments to profile it). Old-style C data structures are likely to be much faster for this, because you can avoid all the retains/releases, allocations, etc. There are other options, though. Duplicate point checking would be much faster with an NSMutableSet if you pixel-align your CGPoints, and overload -isEqual on your Point object.
Do make sure you're pixel-aligning your data. Drawing on fractional pixels (and storing them all), could dramatically increase the number of objects involved and the amount of drawing you're doing. Even if you want the anti-aliasing, at least round the pixels to .5 (or .25 or something). A CGPoint is made up of two doubles. You don't need that kind of precision to draw to the screen.
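For example, the half-pixel rounding could look like this (the helper name is mine):

#include <math.h>

static CGPoint AlignedPoint(CGPoint p) {
    // Round each coordinate to the nearest half pixel so nearly identical
    // touch samples collapse to the same stored point (and compare equal
    // in an NSMutableSet with an overloaded -isEqual).
    return CGPointMake(round(p.x * 2.0) / 2.0,
                       round(p.y * 2.0) / 2.0);
}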
Why not just draw everything onto a CGBitmapContextRef buffer so the drawing operations will accumulate, and then draw that to the screen in your drawRect:? You will be able to perform arbitrary graphics operations without slowing down as the total number of operations increases.
If undo support is necessary, one could always keep a copy for each change made and invalidate the oldest copies when a memory warning is received (or, for an even fancier solution, store the operations as you do now, but keep a cached copy every dozen user operations or so).
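A minimal sketch of that buffer approach (names like strokeContext and strokePath are mine; error handling and retina scaling omitted):

// Create the accumulation buffer once, e.g. in the view's initializer:
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
strokeContext = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                      kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);

// When a touch ends, stamp the finished stroke into the buffer instead
// of appending its points to the stored-graphics arrays:
CGContextSetStrokeColorWithColor(strokeContext, [UIColor blackColor].CGColor);
CGContextAddPath(strokeContext, strokePath); // CGPath built from the touch points
CGContextStrokePath(strokeContext);

// drawRect: then just blits the buffer, so its cost stays constant no
// matter how many strokes have accumulated. (Note that CGContextDrawImage
// uses a bottom-left origin, so flip the context if needed.)
- (void)drawRect:(CGRect)rect {
    CGImageRef image = CGBitmapContextCreateImage(strokeContext);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextDrawImage(ctx, self.bounds, image);
    CGImageRelease(image);
}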