I would like to know how Java paints JComponents on the screen.
I am using AspectJ to instrument all calls to Graphics2D methods. For example, whenever there is a call to Graphics2D.draw(Shape) from the application, I am able to capture that call.
Does Java also use the Graphics class to paint JComponents, let's say JButtons? For example, there is a JButton in my application, and I want to capture the calls to the Graphics class that paint that JButton on the screen. I don't know how to do this.
Any ideas?
UPDATE:
I tried subclassing JButton and overriding the paintComponent method. If I do this, then I can capture the calls to Graphics methods used to draw that JButton.
How can I capture those calls without overriding the specific JComponent class?
Calling yourJButton.repaint() causes Swing to invoke the paint() method. You should not invoke paint() directly; instead, use the repaint method to schedule the component for drawing. paint() actually delegates the work of painting to three protected methods: paintComponent, paintBorder, and paintChildren.
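A minimal, self-contained sketch of that delegation order (the class and method names below are mine, not part of Swing): the three protected hooks are overridden only to trace when they run, and the component is painted into an offscreen BufferedImage so no window is needed.

```java
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;
import javax.swing.JComponent;

public class PaintOrderDemo {

    // Overriding is used here purely to trace the calls; it shows the order
    // in which paint() delegates to the three protected methods.
    static class TracingComponent extends JComponent {
        final List<String> calls = new ArrayList<>();

        @Override protected void paintComponent(Graphics g) { calls.add("paintComponent"); super.paintComponent(g); }
        @Override protected void paintBorder(Graphics g)    { calls.add("paintBorder");    super.paintBorder(g); }
        @Override protected void paintChildren(Graphics g)  { calls.add("paintChildren");  super.paintChildren(g); }
    }

    static List<String> paintOnce() {
        TracingComponent c = new TracingComponent();
        c.setSize(100, 50);                  // paint() returns early on a zero-size component
        BufferedImage img = new BufferedImage(100, 50, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics(); // offscreen target, so this runs headless
        c.paint(g);                          // normally scheduled via repaint(); called directly for the demo
        g.dispose();
        return c.calls;
    }

    public static void main(String[] args) {
        System.out.println(paintOnce());     // [paintComponent, paintBorder, paintChildren]
    }
}
```

Note the sketch invokes paint() directly only because there is no event loop here; in a real application you would call repaint() and let Swing schedule the painting.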
Related
I have a LeafRenderObjectWidget and want to update my view according to the new widget parameters. For example, I have drawn a line and wish to update the line using scale, without painting it again on the next build.
The problem is that in the paint function of RenderObject I have access to a PaintingContext, but it is not the previous one, so I can't use context.canvas.save() and restore it again in the paint function.
When creating RenderObjects from scratch it's easy to get carried away, but my advice is not to fear repainting. When you call methods on a canvas, all it's doing is adding commands to a list that eventually gets sent to the raster thread; no actual rendering work is being done at that point.
It sounds like you are confused about the purpose of canvas.save and canvas.restore. These methods do not save what you painted to the canvas; they only touch the paint transform set by scale, skew, transform, and translate.
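The same distinction exists in Java2D, which makes for an easy self-contained illustration (the class and method names are mine, and this is an analogue, not the Flutter API): saving and restoring the transform does not undo pixels that were already painted.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

public class TransformRestoreDemo {

    // Paint a pixel, then save/change/restore the transform, and check that
    // the painted pixel is still there: save/restore touches only the
    // transform state, never the pixels already on the surface.
    static boolean pixelSurvivesRestore() {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();

        g.setColor(Color.RED);
        g.fillRect(0, 0, 1, 1);                   // painted before the "save"

        AffineTransform saved = g.getTransform(); // "save": capture the current transform
        g.scale(2, 2);                            // mutate the transform
        g.setTransform(saved);                    // "restore": transform rolled back

        g.dispose();
        return img.getRGB(0, 0) == Color.RED.getRGB(); // the pixel was never erased
    }

    public static void main(String[] args) {
        System.out.println(pixelSurvivesRestore()); // true
    }
}
```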
This may look like a question that has already been asked; indeed, it was touched on in the topic Parallel drawing with GTK and Cairo in Python 3. However, is it possible there is another solution to this problem?
To start with, I have created a UI in PyGtk with my custom animations executed with the use of threading (Gtk.threads_enter() and Gtk.threads_leave()). It works well and gives no errors. I also wanted to add some kind of point cloud, and I saw that Cairo may be a good fit for it, with supposed support for Gtk.DrawingArea to handle it.
Here is where the problem starts. Because Cairo uses the drawing event and pretty much overrides it, it draws the image I made for the Gtk.DrawingArea over my UI.
So the image appears in the DrawingArea and is drawn once again over every UI element. I used this tutorial https://blog.fossasia.org/creating-animations-in-gtk-with-pycairo-in-susi-linux-app/ to draw the Cairo animation.
Is there a way to make the drawing calls from PyGTK redraw the UI elements, so they are not overdrawn by Cairo?
I answered my own question. The draw call needs to be in its own class for the DrawingArea, as in the example https://blog.fossasia.org/creating-animations-in-gtk-with-pycairo-in-susi-linux-app/
I tried to connect it to my own main class, but then it redrew the whole window instead of redrawing only the DrawingArea. So when you want to use Cairo, remember to make the DrawingArea a custom class.
I am searching (and have not yet found) a library that can paint to a CanvasPixelArray object with filling abilities.
I want to be able to call a beginFill function, draw some lines, and then call endFill.
Is there such a library?
Maybe GWT has such functionality?
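The beginFill → lines → endFill pattern the question describes maps onto a path-building API. As a rough sketch of the idea (not a specific library recommendation; the class and method names are mine), here it is in plain Java2D, which GWT code could mirror: build a closed outline from lines, then fill it.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.Path2D;
import java.awt.image.BufferedImage;

public class FillPathDemo {

    // "beginFill": start a path; draw some lines; "endFill": close and fill it.
    static boolean insideIsFilled() {
        BufferedImage img = new BufferedImage(20, 20, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();

        Path2D.Double path = new Path2D.Double();
        path.moveTo(2, 2);                        // begin the outline
        path.lineTo(18, 2);
        path.lineTo(10, 18);
        path.closePath();                         // close the outline back to the start

        g.setColor(Color.GREEN);
        g.fill(path);                             // fill the enclosed region
        g.dispose();

        return img.getRGB(10, 5) == Color.GREEN.getRGB(); // a point inside the triangle
    }

    public static void main(String[] args) {
        System.out.println(insideIsFilled()); // true
    }
}
```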
As Apple's documentation says, UIGraphicsGetCurrentContext can only be used in the drawRect method.
If you want to use it somewhere else, you have to push a context first.
Now I want to use UIGraphicsGetCurrentContext to get a context in my own method, called render.
How can I get a context to push?
Can I get the context in drawRect and save it in a non-local variable?
Then push it in another method, and use UIGraphicsGetCurrentContext to get it back.
If so, why do I need to push it and get it again? I could just use the non-local variable directly.
You can call setNeedsDisplay on the view that you need redrawn on a timer, and have its drawRect call your render method (instead of calling render on the timer directly). This way you avoid unusual manipulations of your CG context, and prevent rendering when the rectangle has been scrolled off screen.
Edit:
You use UIGraphicsPushContext and UIGraphicsPopContext when you want a specific context to become the context on which UIKit operates. My initial understanding of what they do was incorrect (I'm relatively new to iOS development myself). For example, there are operations (e.g. setting a color or other drawing parameters) that operate implicitly on the current context. If you set up a context for, say, drawing on a bitmap, and then want to use an operation that modifies the state of the current context (i.e. one that modifies the context's parameters but does not take a specific context reference as a parameter), you push the bitmap context to make it current, perform the operation that implicitly references it, and pop the context right back.
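A loose Java2D analogue of that scoped-state idea (not the UIKit API; Java2D passes contexts explicitly rather than keeping a "current" one, and the class and method names here are mine): take a copy of the context that inherits its state, mutate state on the copy, then discard it, leaving the original untouched, much like a push/pop pair.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class StateScopeDemo {

    // Mutate drawing state on a short-lived copy of the context, then discard
    // the copy; the original context's state is unaffected.
    static Color colorAfterScopedChange() {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.RED);

        Graphics2D scoped = (Graphics2D) g.create(); // "push": copy inherits current state
        scoped.setColor(Color.BLUE);                 // state change is local to the copy
        scoped.dispose();                            // "pop": the copy and its state go away

        Color c = g.getColor();                      // still the original color
        g.dispose();
        return c;
    }

    public static void main(String[] args) {
        System.out.println(colorAfterScopedChange().equals(Color.RED)); // true
    }
}
```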
Special thanks go to rob mayoff for explaining this to me.
To get a CGContext into which you can draw in a render call outside of drawRect, you can allocate your own graphics bitmap and create a context from that bitmap. Then you can draw to that context at any time.
If you wish to display that context after drawing into it, you can use it to create an image, and then draw that image to a view during the UIView's drawRect. Alternatively, you could assign that image to a view's CALayer contents, which should be flushed to the display sometime during the UI run loop's processing.
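The offscreen-bitmap approach translates directly to other toolkits. As a hedged Java2D sketch (the class, field, and method names are mine, not the iOS API): render() draws into a bitmap we own whenever it likes, and a paint callback would later just copy that bitmap to the screen.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class OffscreenRenderDemo {

    // A bitmap we allocated ourselves; a context created from it can be
    // drawn to at any time, outside any paint callback.
    static final BufferedImage buffer =
            new BufferedImage(50, 50, BufferedImage.TYPE_INT_ARGB);

    // Hypothetical render() method, free to run whenever (e.g. on a timer).
    static void render() {
        Graphics2D g = buffer.createGraphics();
        g.setColor(Color.BLUE);
        g.fillOval(10, 10, 30, 30);
        g.dispose();
        // a real view would now schedule a repaint that copies `buffer` to the screen
    }

    public static void main(String[] args) {
        render();
        // the drawing landed in our own bitmap, ready to be displayed later
        System.out.println(buffer.getRGB(25, 25) == Color.BLUE.getRGB()); // true
    }
}
```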
You can use CGContextSaveGState and CGContextRestoreGState to push and pop the state of your current graphics context. At this link there is a simple example.
I am new to iPhone application development and am currently working on a simple paint application for iPhone. I used the GLPaint source code as a starting point. I tried to change the brush size in the following ways.
I created a UIViewController class, linked it to GLPaint.PaintingView, and added different buttons to indicate different brush sizes.
I tried to dynamically pass different brush images. But initWithCoder is called only when the paint view loads, so the brush image @"Particle.png" does not get changed.
I tried extracting the logic in initWithCoder into another method that takes the brush string as a parameter, so that I could call the extracted method when a brush button is selected. But since the brush buttons are in another view/view controller, the change of image is not applied.
Is there any method to change the brush size, just like "(void)setBrushColorWithRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue" changes the color?
Any help will be very much useful. Thank you.
Did you try changing kBrushSize, which is used in the call to glPointSize() in -initWithCoder:?