How to draw a Pixmap with partial transparency in a GTK application

I'm just getting started with Mono programming using GTK, and have been pleasantly surprised. However, I have come across a hurdle I haven't been able to get over yet.
In the app I'm working on, I am able to load a JPEG image into a Pixmap and draw it to my GUI's Drawing Area. That works fine. However, I want to be able to take a second JPEG image, make it partially transparent, and draw it over the first. So far, I haven't been able to figure out a decent way to do this.
Is it somehow possible to change the alpha value of an entire Pixmap before I draw it? I'm not sure where to go from here.

If you're using GtkDrawingArea, you should be using Cairo for the drawing itself. As an alternative to cairo_paint(), there is cairo_paint_with_alpha(), which lets you specify the opacity you wish to paint with.
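The question is about GTK# under Mono, but the Cairo API is essentially the same across bindings. Here is a minimal PyGObject/GTK 3 sketch of the idea, assuming two placeholder JPEGs (base.jpg and overlay.jpg) and an arbitrary 50% opacity; the GTK# and C calls mirror these names:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Gdk, GdkPixbuf

# Hypothetical file names; any two JPEGs will do.
base = GdkPixbuf.Pixbuf.new_from_file("base.jpg")
overlay = GdkPixbuf.Pixbuf.new_from_file("overlay.jpg")

def on_draw(area, cr):
    # Paint the first image fully opaque.
    Gdk.cairo_set_source_pixbuf(cr, base, 0, 0)
    cr.paint()
    # Paint the second image over it at 50% opacity.
    Gdk.cairo_set_source_pixbuf(cr, overlay, 0, 0)
    cr.paint_with_alpha(0.5)

win = Gtk.Window()
area = Gtk.DrawingArea()
area.connect("draw", on_draw)
win.add(area)
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()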

Related

How to make PyGTK and Cairo work in parallel?

This may look like a question that has already been asked, and indeed it was touched on in the topic Parallel drawing with GTK and Cairo in Python 3. However, is it possible there is another solution to this problem?
To start with, I have created a UI in PyGTK with my custom animations executed using threading (Gtk.threads_enter() and Gtk.threads_leave()). It works well and produces no errors. I also wanted to add some kind of point cloud, and Cairo looked like a good fit for it, given its support for drawing into a Gtk.DrawingArea.
Here is where the problem starts. Because Cairo hooks the draw event and pretty much overrides it, it draws the image I made for the Gtk.DrawingArea over my whole UI.
So the image appears in the DrawingArea and is then drawn again over every UI element. I used this tutorial to draw the Cairo animation: https://blog.fossasia.org/creating-animations-in-gtk-with-pycairo-in-susi-linux-app/
Is there a way to make the PyGTK drawing calls redraw the UI elements, so they are not overdrawn by Cairo?
I answered my own question. The draw call needs to live in its own DrawingArea class, as in the example at https://blog.fossasia.org/creating-animations-in-gtk-with-pycairo-in-susi-linux-app/
I had tried to connect it in my own main class, but then it redrew the whole window instead of only the DrawingArea. So when you want to use Cairo, remember to make the DrawingArea a custom class, as sketched below.
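For illustration, here is a minimal PyGObject/GTK 3 sketch of that structure; the class name, timing, and orbiting-dot animation are hypothetical. The point is that the subclass owns its own draw handler and calls queue_draw() on itself, so only the DrawingArea is invalidated rather than the whole window:

import math
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, GLib

class PointCloudArea(Gtk.DrawingArea):
    def __init__(self):
        super().__init__()
        self.angle = 0.0
        self.connect("draw", self.on_draw)
        GLib.timeout_add(33, self.tick)  # redraw ~30 times per second

    def on_draw(self, widget, cr):
        w = self.get_allocated_width()
        h = self.get_allocated_height()
        cr.set_source_rgb(0, 0, 0)
        cr.paint()
        # Draw one orbiting point as a stand-in for the point cloud.
        cr.set_source_rgb(1, 1, 1)
        cr.arc(w / 2 + 40 * math.cos(self.angle),
               h / 2 + 40 * math.sin(self.angle),
               4, 0, 2 * math.pi)
        cr.fill()
        return False

    def tick(self):
        self.angle += 0.1
        self.queue_draw()  # invalidates only this widget, not the window
        return True  # keep the timeout running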

Unity sprites don't render properly

I recently came across a problem I can't solve: my sprites don't draw properly. I have tried a lot of different things and couldn't find a solution.
Here is how the image should look in Unity:
And here is how it actually looks:
If someone could tell me how to fix this, I would be very grateful.
Presumably the top image is a screenshot of your image manipulation program, many of which use the chequerboard pattern to mean transparency. As such, the image you have exported is a gradient going from almost solid white at the bottom to transparent at the top. This is why the image appears as such in Unity.
Also, if you're wondering why the image appears to have bands of different colours, this is a problem called colour banding. It can be fixed with a technique called dithering (which adds a small amount of noise to the image), but how you do so will depend on which image manipulation program you are using.
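As a rough, editor-independent illustration of the idea, here is one way to dither a vertical gradient in Python with NumPy and Pillow, adding roughly one 8-bit step of random noise before quantising (the sizes and file name are arbitrary):

import numpy as np
from PIL import Image

h, w = 256, 256
# A smooth gradient computed at float precision.
gradient = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))
# About one least-significant bit of noise breaks up the bands.
noise = (np.random.rand(h, w) - 0.5) / 255.0
dithered = np.clip(gradient + noise, 0.0, 1.0)
Image.fromarray((dithered * 255).round().astype(np.uint8)).save("gradient_dithered.png")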

Dragging/Resizing a UIImage on the Device

I'd like to allow a user to add a shape (which would just be a UIImage) onto some sort of canvas, then move and resize it on the screen but I'm not sure how to go about this. Ideally I'd like the basics of a drawing app which can use images from a user's device. Each shape would have an associated position, size and z-index.
The only thing I'm unsure of is how I'd create a bounding box (the one with four blue dots to allow resizing/moving). I have experience with UIKit, and would prefer to keep the majority of the app in this for the time being, but I get the feeling this type of thing might be better suited to Cocos2D or a similar framework.
If anyone has any pointers/open source code I can dig through it would be hugely appreciated.
I think you should look into CALayer, or even CAShapeLayer. I'm just starting to play with them, but I'm pretty sure you can easily get the functionality you want with either. Draw the border in the layer's drawLayer:inContext:. Check out the path-drawing section of the Quartz 2D Programming Guide for the functions you need.

Creating a composited image with a transparent background

In Eclipse, I have a view that uses GEF, and for some of the figures that I display I need to paint a background.
For some of the backgrounds, I use a system of folders with a set of predefined images (top_left.png, top.png, top_right.png, left.png, middle.png, ..., bottom_right.png) and, while I can recreate the background whenever needed, doing so is highly inefficient (especially since it redraws every time the view is scrolled or a fly passes by).
To avoid having to recreate the background image every time, I want to use a caching system: I create an Image object onto which I paint each image, and then I cache that image in a map (the key being the dimensions of the image).
To be able to have rounded corners, the cached image needs to have a transparent background, and this is where I am stuck:
I have tried setting the transparent pixel and painting with the same color, but without success.
I tried using ImageData to set the alpha depending on the alpha of each source image, but in that case, while the transparency comes out well, the created image is all white.
Is there a way in SWT to create an image with a transparent background that I can paint other images onto?
Update:
I found a possible solution: use a BufferedImage from AWT and convert it to SWT using the code found at http://www.java2s.com/Code/Java/SWT-JFace-Eclipse/ConvertbetweenSWTImageandAWTBufferedImage.htm
While a good base, this code doesn't actually handle transparency, so I modified it quickly (and quite dirtily) to do so. Is there reliable code for converting images between AWT and SWT in both directions?
I would prefer a solution to my problem that doesn't involve converting back and forth between image formats.

OpenGL ES for iPhone blending not working

I'm a beginner to 3D graphics in general, and I'm trying to make a 3D game for the iPhone; more specifically, I want to use textures that contain transparency. I am able to load a texture (an 8-bit .png file) into OpenGL and map it to a square (made from a triangle strip), but the transparent parts of the image are not transparent when I run the app in the simulator: they take on the background colour, whatever it is set to, yet they still obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First of all, your transparent object is drawn.
At this point, two things happen:
The pixels are drawn correctly to the back buffer.
The object's depth values are written to the depth buffer. Note that depth values are written all across your object; transparency does not affect them.
You then draw other objects behind the transparent object.
But none of those objects' pixels will be drawn, because they are further away: they fail the depth test against the values already in the depth buffer.
The solution to this problem is to draw your scene back-to-front (start with the things that are furthest away).
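Sketched against a generic GL binding (PyOpenGL names; the scene and camera helpers are hypothetical), the usual pattern is: draw opaque geometry first with depth writes on, then draw transparent geometry sorted far-to-near, commonly with depth writes disabled so nearer transparent surfaces don't block farther ones:

from OpenGL.GL import (glEnable, glDisable, glDepthMask, glBlendFunc,
                       GL_BLEND, GL_DEPTH_TEST, GL_TRUE, GL_FALSE,
                       GL_ONE, GL_ONE_MINUS_SRC_ALPHA)

def draw_scene(opaque_objects, transparent_objects, camera):
    glEnable(GL_DEPTH_TEST)

    # 1. Opaque geometry first, depth writes enabled.
    glDisable(GL_BLEND)
    glDepthMask(GL_TRUE)
    for obj in opaque_objects:
        obj.draw()  # hypothetical draw helper

    # 2. Transparent geometry last, sorted back-to-front, with depth
    #    writes off so transparent surfaces don't occlude each other.
    glEnable(GL_BLEND)
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)  # premultiplied alpha
    glDepthMask(GL_FALSE)
    for obj in sorted(transparent_objects,
                      key=lambda o: camera.distance_to(o),  # hypothetical
                      reverse=True):
        obj.draw()
    glDepthMask(GL_TRUE)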
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.