vImageAlphaBlend crashes - swift

I'm trying to alpha blend some layers: [CGImageRef] in the drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!) routine of my custom NSView. Until now I used CGContextDrawImage() for drawing those layers into the drawLayer context. While profiling, I noticed that CGContextDrawImage() takes 70% of the CPU time, so I decided to try the Accelerate framework. I changed the code, but it just crashes and I have no clue why.
I'm creating those layers like this:
func addLayer() {
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
    let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)
    var layerContext = CGBitmapContextCreate(nil, UInt(canvasSize.width), UInt(canvasSize.height), 8, UInt(canvasSize.width * 4), colorSpace, bitmapInfo)
    var newLayer = CGBitmapContextCreateImage(layerContext)
    layers.append(newLayer)
}
My drawLayers routine looks like this:
override func drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!)
{
    var ctxImageBuffer = vImage_Buffer(data: CGBitmapContextGetData(ctx),
                                       height: CGBitmapContextGetHeight(ctx),
                                       width: CGBitmapContextGetWidth(ctx),
                                       rowBytes: CGBitmapContextGetBytesPerRow(ctx))
    for imageLayer in layers
    {
        //CGContextDrawImage(ctx, CGRect(origin: frameOffset, size: canvasSize), imageLayer)
        var inProvider: CGDataProviderRef = CGImageGetDataProvider(imageLayer)
        var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)
        var buffer: vImage_Buffer = vImage_Buffer(data: &inBitmapData,
                                                  height: CGImageGetHeight(imageLayer),
                                                  width: CGImageGetWidth(imageLayer),
                                                  rowBytes: CGImageGetBytesPerRow(imageLayer))
        vImageAlphaBlend_ARGB8888(&buffer, &ctxImageBuffer, &ctxImageBuffer, 0)
    }
}
The canvasSize is always the same and all the layers have the same size, so I don't understand why the last line crashes.
Also, I don't see how to use the new convenience functions to create vImage_Buffers directly from CGLayerRefs. That's why I do it the complicated way.
Any help appreciated.
EDIT
inBitmapData indeed holds pixel data that reflects the background color I set. However, the debugger cannot po &inBitmapData and fails with this message:
error: reference to 'CFData' not used to initialize a inout parameter &inBitmapData
So I looked for a way to get a pointer to inBitmapData. This is what I came up with:
var bitmapPtr: UnsafeMutablePointer<CFDataRef> = UnsafeMutablePointer<CFDataRef>.alloc(1)
bitmapPtr.initialize(inBitmapData)
I also had to change the way I point at my data for both buffers that I need as alpha blend input. Now it's not crashing anymore, and the speed boost shows up in the profiler (vImageAlphaBlend takes only about a third of the time CGContextDrawImage did). Unfortunately, the result is a transparent image with pixel artifacts instead of the white image background.
I no longer get any runtime errors, but since the result is not as expected, I fear I am still not using the alpha blend function correctly.

vImage_Buffer.data should point to the CFData data (pixel data), not the CFDataRef.
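For example, a minimal sketch in the question's Swift 1.x style, using CFDataGetBytePtr to reach the actual pixel bytes (the mutable cast is an assumption that is only safe here because the buffer is used as a blend source):

let inBitmapData: CFDataRef = CGDataProviderCopyData(CGImageGetDataProvider(imageLayer))
// point at the pixel bytes inside the CFData, not at the CFDataRef variable itself
var buffer = vImage_Buffer(data: UnsafeMutablePointer<Void>(CFDataGetBytePtr(inBitmapData)),
                           height: CGImageGetHeight(imageLayer),
                           width: CGImageGetWidth(imageLayer),
                           rowBytes: CGImageGetBytesPerRow(imageLayer))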
In addition, not all images store their data as four-channel, 8-bit-per-channel data. If it turns out to be three-channel, RGBA, or monochrome, you may get more crashes or funny colors. You have also assumed that the raw image data is not premultiplied, which may not be a safe assumption.
You are better off using vImageBuffer_InitWithCGImage so that you can guarantee the format and colorspace of the raw image data. A more specific question about that function might help us resolve your confusion about it.
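Roughly, the call looks like this (a sketch in the question's Swift 1.x style; the vImage_CGImageFormat field bridging may differ slightly by SDK):

var format = vImage_CGImageFormat(bitsPerComponent: 8,
                                  bitsPerPixel: 32,
                                  colorSpace: nil, // nil defaults to sRGB
                                  bitmapInfo: CGBitmapInfo(CGImageAlphaInfo.First.rawValue),
                                  version: 0,
                                  decode: nil,
                                  renderingIntent: kCGRenderingIntentDefault)
var buffer = vImage_Buffer()
// copies/converts the CGImage pixels into buffer.data in the requested format
let error = vImageBuffer_InitWithCGImage(&buffer, &format, nil, imageLayer, vImage_Flags(kvImageNoFlags))
// the caller owns buffer.data afterwards; free(buffer.data) when done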
Some CG calls fall back on vImage to do the work, and rewriting your code this way might be unprofitable in such cases. Usually the right first step is to look carefully at the backtraces inside the CG call to understand why you are causing so much work for it. Often the answer is colorspace conversion. I would look carefully at the CGBitmapInfo and colorspace of the drawing surface and your images and see whether there isn't something you could do to get those to match up a bit better.
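For example (an illustrative guess, not the poster's code): creating the layer bitmaps in the byte order CG generally prefers on these devices, BGRA8888 (premultiplied first, 32-bit little-endian), can remove a conversion pass:

let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue)
// a bytesPerRow of 0 lets CG choose an optimal row stride
let layerContext = CGBitmapContextCreate(nil, UInt(canvasSize.width), UInt(canvasSize.height), 8, 0, colorSpace, bitmapInfo)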
IIRC, CALayerRefs usually have their data in non-cacheable storage for better GPU access. That could cause problems for the CPU. If the data is in a CALayerRef, I would use CA to do the compositing. Also, I believe CALayers are nearly always BGRA 8-bit premultiplied. If you are not going to use CA to do the compositing, then the right vImage function is probably vImagePremultipliedAlphaBlend_RGBA/BGRA8888.
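If both buffers really are premultiplied BGRA8888 of identical size, the in-place blend would look roughly like this (a sketch; layerBuffer stands for the per-layer vImage_Buffer, which is not the poster's variable name):

let err = vImagePremultipliedAlphaBlend_BGRA8888(&layerBuffer, &ctxImageBuffer, &ctxImageBuffer, vImage_Flags(kvImageNoFlags))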

Related

CoreImage: CIImage write JPG is shifting colors [macOS]

Using CoreImage to filter photos, I have found that saving to a JPG file results in an image that has a subtle but visible blue hue. In this example using a B&W image, the histogram reveals how the colors have been shifted in the saved file.
(Screenshots: input image and output image; the output histogram shows the color layers offset from one another.)
-- Issue demonstrated with the macOS Preview app
I can show a similar result using only the Preview app. Test image here: https://i.stack.imgur.com/Y3f03.jpg
1. Open the JPG image using Preview.
2. Export to JPEG at any 'Quality' other than the default (85%?).
3. Open the exported file and look at the histogram; the same color shifting can be seen as I experience within my app.
-- Issue demonstrated in a custom macOS app
The code here is as bare-bones as possible, creating a CIImage from the photo and immediately saving it without applying any filters. In this example I chose 0.61 for compression as it resulted in a file size similar to the original. The distortion seems broader at higher compression ratios, but I could not find any value that would eliminate it.
if let img = CIImage(contentsOf: url) {
    let dest = procFolder.url(named: "InOut.jpg")
    img.jpgWrite(url: dest)
}

extension CIImage {
    func jpgWrite(url: URL) {
        let prop: [NSBitmapImageRep.PropertyKey: Any] = [
            .compressionFactor: 0.61
        ]
        let bitmap = NSBitmapImageRep(ciImage: self)
        let data = bitmap.representation(using: NSBitmapImageRep.FileType.jpeg, properties: prop)
        do {
            try data?.write(to: url, options: .atomic)
        } catch {
            log.error(error)
        }
    }
}
Update 1: Using @Frank Schlegel's answer for saving the JPG file
The JPG now carries a ColorSync profile, and I can (unscientifically) measure a ~10% performance boost for portrait images (less for landscape), which are nice improvements. Unfortunately, the resulting file still skews the colors in the same way demonstrated in the histograms above.
extension CIImage {
    static let writeContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [
        // using an extended working color space allows you to retain wide gamut information, e.g., if the input is in DisplayP3
        .workingColorSpace: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
        .workingFormat: CIFormat.RGBAh // 16-bit color depth, needed in extended space
    ])

    func jpgWrite(url: URL) {
        // write the output in the same color space as the input; fall back to sRGB if it can't be determined
        let outputColorSpace = colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!
        do {
            try CIImage.writeContext.writeJPEGRepresentation(of: self, to: url, colorSpace: outputColorSpace, options: [:])
        } catch {
        }
    }
}
Question:
How can I open a B&W JPG as a CIImage, and re-save a JPG file avoiding any color shifting?
This looks like a color sync issue (as Leo pointed out) – more specifically a mismatch/misinterpretation of color spaces between input, processing, and output.
When you call NSBitmapImageRep(ciImage:), there's a lot happening under the hood. The system needs to render the CIImage you provide to get the bitmap data of the result. It does so by creating a CIContext with default (device-specific) settings, using it to process your image (with all filters and transformations applied), and then giving you the raw bitmap data of the result. In the process, multiple color space conversions happen that you can't control when using this API (and that seemingly don't lead to the result you intended). I don't like these "convenience" APIs for rendering CIImages for this reason, and I see a lot of questions on SO that are related to them.
I recommend you instead use a CIContext to render your CIImage into a JPEG file. This gives you direct control over color spaces and more:
let input = CIImage(contentsOf: url)!

// ideally you create this context once and re-use it because it's an expensive object
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [
    // using an extended working color space allows you to retain wide gamut information, e.g., if the input is in DisplayP3
    .workingColorSpace: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
    .workingFormat: CIFormat.RGBAh // 16-bit color depth, needed in extended space
])

// write the output in the same color space as the input; fall back to sRGB if it can't be determined
let outputColorSpace = input.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!
try context.writeJPEGRepresentation(of: input, to: dest, colorSpace: outputColorSpace, options: [kCGImageDestinationLossyCompressionQuality: 0.61])
Please let me know if you still see a discrepancy when using this API.
I never found the underlying cause of this issue, and therefore no 'true' solution as I was seeking. Discussion with @Frank Schlegel led me to believe it is an artifact of Apple's JPEG converter. The issue was certainly more apparent when using test files that appear monochrome but actually contain a small amount of color information.
The simplest fix for my app was to ensure there was no color in the source image, so I drop the saturation to 0 prior to saving the file.
let params = [
    "inputBrightness": brightness, // -1...1, this filter calculates brightness by adding a bias value: color.rgb + vec3(brightness)
    "inputContrast": contrast,     // 0...2, this filter uses the following formula: (color.rgb - vec3(0.5)) * contrast + vec3(0.5)
    "inputSaturation": saturation  // 0...2
]
image.applyingFilter("CIColorControls", parameters: params)
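For instance, a hypothetical call site with neutral brightness and contrast, dropping saturation to zero before the write:

let grayscale = image.applyingFilter("CIColorControls", parameters: [
    "inputBrightness": 0.0, // leave brightness unchanged
    "inputContrast": 1.0,   // leave contrast unchanged
    "inputSaturation": 0.0  // strip all chroma so the JPEG encoder has no color to shift
])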

MTKView refresh issue

I am compositing an array of UIImages via an MTKView, and I am seeing refresh issues that only manifest themselves during the composite phase, but which go away as soon as I interact with the app. In other words, the composites are working as expected, but their appearance on-screen looks glitchy until I force a refresh by zooming in/translating, etc.
I posted two videos that show the problem in action: Glitch1, Glitch2
My compositing approach is to convert each UIImage into an MTLTexture, submit it to a render pass whose load action is set to .load, and render a poly with this texture on it, repeating the process for each image in the UIImage array.
The composites work, but the screen feedback, as you can see from the videos, is very glitchy.
Any ideas as to what might be happening? Any suggestions would be appreciated
Some pertinent code:
for strokeDataCurrent in strokeDataArray {
    let strokeImage = UIImage(data: strokeDataCurrent.image)
    let strokeBbox = strokeDataCurrent.bbox
    let strokeType = strokeDataCurrent.strokeType
    self.brushStrokeMetal.drawStrokeImage(paintingViewMetal: self.canvasMetalViewPainting, strokeImage: strokeImage!, strokeBbox: strokeBbox, strokeType: strokeType)
} // end of for strokeDataCurrent in strokeDataArray
...
func drawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect, strokeType: brushTypeMode) {
    // set up proper compositing mode fragmentFunction
    self.updateRenderPipeline(stampCompStyle: drawStampCompMode)
    let stampTexture = UIImageToMTLTexture(strokeUIImage: strokeUIImage)
    let stampColor = UIColor.white
    let stampCorners = self.stampSetVerticesFromBbox(bbox: strokeBbox)
    self.stampAppendToVertexBuffer(stampUse: stampUseMode.strokeBezier, stampCorners: stampCorners, stampColor: stampColor)
    self.renderStampSingle(stampTexture: stampTexture)
} // end of func drawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect)
func renderStampSingle(stampTexture: MTLTexture) {
    // this routine is designed to update metalDrawableTextureComposite one stroke at a time, taking into account
    // whatever compMode the stroke requires. Note that we copy the contents of metalDrawableTextureComposite to
    // self.currentDrawable!.texture because the goal will be to eventually display a resulting composite
    let renderPassDescriptorSingleStamp: MTLRenderPassDescriptor? = self.currentRenderPassDescriptor
    renderPassDescriptorSingleStamp?.colorAttachments[0].loadAction = .load
    renderPassDescriptorSingleStamp?.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 0)
    renderPassDescriptorSingleStamp?.colorAttachments[0].texture = metalDrawableTextureComposite

    // Create a new command buffer for each tessellation pass
    let commandBuffer: MTLCommandBuffer? = commandQueue.makeCommandBuffer()
    let renderCommandEncoder: MTLRenderCommandEncoder? = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptorSingleStamp!)
    renderCommandEncoder?.label = "Render Command Encoder"
    renderCommandEncoder?.setTriangleFillMode(.fill)
    defineCommandEncoder(
        renderCommandEncoder: renderCommandEncoder,
        vertexArrayStamps: vertexArrayStrokeStamps,
        metalTexture: stampTexture) // foreground sub-curve chunk
    renderCommandEncoder?.endEncoding() // finalize renderEncoder set up

    // begin presentsWithTransaction approach (needed to better synchronize with Core Image scheduling)
    copyTexture(buffer: commandBuffer!, from: metalDrawableTextureComposite, to: self.currentDrawable!.texture)
    commandBuffer?.commit() // commit and send task to gpu
    commandBuffer?.waitUntilScheduled()
    self.currentDrawable!.present()
    // end presentsWithTransaction approach

    self.initializeStampArray(stampUse: stampUseMode.strokeBezier) // clears out the stamp array in preparation for the next draw call
} // end of func renderStampSingle(stampTexture: MTLTexture)
First of all, the domain of Metal is very deep, and its use within the MTKView construct is sparsely documented, especially for applications that fall outside the more traditional gaming paradigm. This is where I have found myself in the limited experience I have accumulated with Metal, with help from folks like @warrenm, @ken-thomases, and @modj, whose contributions have been so valuable to me and to the Swift/Metal community at large. So a deep thank you to all of you.
Secondly, to anyone troubleshooting Metal, please take note of the following: if you are getting the message
[CAMetalLayerDrawable present] should not be called after already presenting this drawable. Get a nextDrawable instead
please don't ignore it. It may seem harmless enough, especially if it only gets reported once. But beware: this is a sign that part of your implementation is flawed and must be addressed before you can troubleshoot any other Metal-related aspect of your app. At least this was the case for me. As you can see from the video posts, the symptoms of this problem were pretty severe and caused unpredictable behavior whose source I had a difficult time pinpointing. What was especially difficult for me to see was that I only got this message ONCE, early in the app cycle, but that single instance was enough to throw everything else graphically out of whack in ways that I thought were attributable to CoreImage and/or other totally unrelated design choices I had made.
So, how did I get rid of this warning? Well, in my case, I assumed that having the settings:
self.enableSetNeedsDisplay = true // needed so we can call setNeedsDisplay() to force a display update as soon as metal deems possible
self.isPaused = true // needed so the draw() loop does not get called once/fps
self.presentsWithTransaction = true // for better synchronization with CoreImage (such as simultaneously turning on a layer while also clearing MTKView)
meant that I could pretty much call currentDrawable!.present() or commandBuffer.presentDrawable(view.currentDrawable) directly whenever I wanted to refresh the screen. Well, this is not the case AT ALL. It turns out these calls should only be made within the draw() loop and only accessed via a setNeedsDisplay() call. Once I made this change, I was well on my way to solving my refresh riddle.
Furthermore, I found that the MTKView setting self.isPaused = true (so that I could make setNeedsDisplay() calls directly) still resulted in some unexpected behavior. So, instead, I settled for:
self.enableSetNeedsDisplay = false // display updates are now driven by the draw() loop below rather than by setNeedsDisplay()
self.isPaused = false // draw() loop gets called once per frame
self.presentsWithTransaction = true // for better synchronization with CoreImage
as well as modifying my draw() loop to drive what kind of update to carry out once I set a metalDrawableDriver flag AND call setNeedsDisplay():
override func draw(_ rect: CGRect) {
    autoreleasepool {
        switch metalDrawableDriver {
        case stampRenderMode.canvasRenderNoVisualUpdates:
            return
        case stampRenderMode.canvasRenderClearAll:
            renderClearCanvas()
        case stampRenderMode.canvasRenderPreComputedComposite:
            renderPreComputedComposite()
        case stampRenderMode.canvasRenderStampArraySubCurve:
            renderSubCurveArray()
        } // end of switch metalDrawableDriver
    } // end of autoreleasepool
} // end of draw()
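For illustration, a hypothetical trigger site (names taken from the post, call site assumed): user code sets the mode flag and requests a frame, and the next pass through draw() performs the matching render:

// somewhere in user-interaction code (hypothetical call site)
metalDrawableDriver = stampRenderMode.canvasRenderPreComputedComposite
canvasMetalViewPainting.setNeedsDisplay()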
This may seem roundabout, but it was the only mechanism I found to get consistent user-driven display updates.
It is my hope that this post describes an error-free and viable solution that Metal developers may find useful in the future.

Having issues with drawing to a frame buffer texture. It draws blank

I am using OpenGL ES 2.0 with GLKit, trying to render to iOS devices.
Basically, my goal is to draw to a texture instead of the main buffer, then render that texture to the screen. I have been trying to follow another topic on SO. Unfortunately, they mention something about power-of-two sizes (I'm assuming with regard to resolution), but I don't know how to fix it. Anyway, here is my Swift interpretation of the code from that topic.
import Foundation
import GLKit
import OpenGLES

class RenderTexture {
    var framebuffer: GLuint = 0
    var tex: GLuint = 0
    var old_fbo: GLint = 0

    init(width: GLsizei, height: GLsizei)
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glGenFramebuffers(1, &framebuffer)
        glGenTextures(1, &tex)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex, 0)
        glClearColor(0, 0.1, 0, 1)
        glClear(GLenum(GL_COLOR_BUFFER_BIT))
        let status = glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER))
        if (status != GLenum(GL_FRAMEBUFFER_COMPLETE))
        {
            print("DIDNT GO WELL WITH", width, " ", height)
            print(status)
        }
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }

    func begin()
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    }

    func end()
    {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }
}
Then, as far as rendering, I have a few things going on.
First, code that theoretically renders any texture full screen. It has been tested with two manually loaded PNGs (with no buffer changes) and works great.
func drawTriangle(texture: GLuint)
{
    loadBuffers()
    //glViewport(0, 0, width, height)
    //glClearColor(0, 0.0, 0, 1.0)
    //glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))
    glEnable(GLenum(GL_TEXTURE_2D))
    glActiveTexture(GLenum(GL_TEXTURE0))
    glUseProgram(texShader)

    let loc1 = glGetUniformLocation(texShader, "s_texture")
    glUniform1i(loc1, 0)

    let loc3 = glGetUniformLocation(texShader, "matrix")
    if (loc3 != -1)
    {
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_FALSE), &matrix)
    }

    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 6)
    glDisable(GLenum(GL_TEXTURE_2D))

    destroyBuffers()
}
I also have a function that draws a couple of dots on the screen. You don't really need to see its internals, but it works. This is how I will know that OpenGL is drawing from the buffer texture and NOT a preloaded texture.
Finally here is the gist of the code I am trying to do.
func initialize()
{
    nfbo = RenderTexture(width: width, height: height)
}

func draw()
{
    glViewport(0, 0, GLsizei(width * 2), GLsizei(height * 2)) // why do I have to multiply by 2 to get it to work?????
    nfbo.begin()
    drawDots() // draws the dots
    nfbo.end()
    reset()
    drawTriangle(nfbo.tex)
}
At the end of all this, all that is drawn is a blank screen. If there is any more code that would help you figure things out, let me know; I tried to trim it to make it less annoying for you.
Note: Considering the whole power-of-two thing, I have tried passing the FBO class 512 x 512, just in case being a power of two would make things work. Unfortunately, it didn't.
Another note: All I am doing is going to be 2D, so I don't need depth buffers, right?
Yesterday I saw exactly the same issue. After struggling for hours, I found out why.
The trick is configuring your texture map with the following:
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE);
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE);
Otherwise, you won't draw anything on the texture map.
The reason seems to be that while iOS supports texture maps that are not a power of two, it requires GL_CLAMP_TO_EDGE; otherwise it won't work.
It should really report an incomplete framebuffer. It took me quite a long time to debug this problem!
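For clarity, a sketch of where those calls belong in the question's RenderTexture init, right after glBindTexture. Note that non-power-of-two textures in ES 2.0 also need a non-mipmapped minification filter, since the default (GL_NEAREST_MIPMAP_LINEAR) leaves an NPOT texture incomplete:

glBindTexture(GLenum(GL_TEXTURE_2D), tex)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR) // NPOT textures must not use mipmapped filters
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)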
Here is a related discussion:
Rendering to non-power-of-two texture on iPhone

Get size of repeated pattern from UIColor?

I can query whether a UIColor is a pattern by inspecting the CGColor instance it wraps; the CGColorGetPattern() function returns the pattern if it exists, or NULL if it is not a pattern color.
CGPatternCreate() requires a bounds when creating a pattern; this value defines the size of the pattern tile (known as a cell in Quartz parlance).
How would I go about retrieving this pattern size from a UIColor, or from the backing CGPattern once it has been created?
If your application is intended for internal distribution only, you can use a private API. If you look at the functions defined in the CoreGraphics framework, you will see a bunch of functions, among them one called CGPatternGetBounds:
otool -tV /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS4.3.sdk/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics | egrep "^_CGPattern"
You just have to look the function up in the framework at runtime and call it through a function pointer.
The header to include:
#include <dlfcn.h>
The function pointer:
typedef CGRect (*CGPatternGetBounds)(CGPatternRef pattern);
The code to retrieve the function:
void *handle = dlopen("/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics", RTLD_NOW);
CGPatternGetBounds getBounds = (CGPatternGetBounds) dlsym(handle, "CGPatternGetBounds");
The code to retrieve the bounds:
UIColor *uicolor = [UIColor groupTableViewBackgroundColor]; // Select a pattern color
CGColorRef color = [uicolor CGColor];
CGPatternRef pattern = CGColorGetPattern(color);
CGRect bounds = getBounds (pattern); // This result is a CGRect(0, 0, 84, 1)
I don't think it's possible to get the bounds from the CGPatternRef, if that's what you're asking.
There does not appear to be any way to directly retrieve any information from the CGPatternRef.
If you must do this, probably the only way (besides poking at the private contents of the CGPattern struct, which probably counts as "using private APIs") is to render the pattern to a sufficiently large image buffer and then detect the repeating subunit. Finding repeating patterns/images in images may be a good starting point for that.
Making your own color class that stores the bounds and lets you access them through a property might be a viable solution.
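A minimal sketch of that idea (a hypothetical wrapper class, written in Swift for brevity): record the tile size when the pattern color is created, since CGPattern exposes no public accessor for it.

class PatternColor {
    let color: UIColor
    let patternBounds: CGRect // the tile ("cell") size, captured at creation time

    init(patternImage: UIImage) {
        color = UIColor(patternImage: patternImage)
        patternBounds = CGRect(origin: .zero, size: patternImage.size)
    }
}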
Extracting the pattern bounds from UIColor doesn't seem to be possible.

Can I use icons with RGBA transparency when using GTK+ drag and drop?

I am in the process of adding drag and drop support to an existing Mono/C#/GTK# application. I was wondering whether it was possible to use RGBA transparency on the icons that appear under the mouse pointer when I start dragging an object.
So far, I realized the following:
I can set the bitmap in question by calling the Gtk.Drag.SourceSetIconPixbuf() method. However, no luck with alpha transparency: pixels that are not fully opaque become 100% transparent this way.
I also tried calling RenderPixmapAndMask() on the GdkPixbuf so that I could use Gtk.Drag.SourceSetIcon() with an RGBA colormap of my Screen. It didn't work either: whenever I started dragging, I got the following error:
[Gdk] IA__gdk_window_set_back_pixmap: assertion 'pixmap == NULL || gdk_drawable_get_depth (window) == gdk_drawable_get_depth (pixmap)' failed.
This way, the pixmap doesn't even get copied; only a white shape (presumably set by the mask argument of SourceSetIcon()) shows up on dragging.
I'd like to ask whether there's a way to give these icons alpha transparency, despite the fact that I failed to do so. In case it's impossible, answers discussing the reasons for the lack of this feature would also be helpful. Thank you.
(Compositing is - of course - enabled on my desktop (Ubuntu/10.10, Compiz/0.8.6-0ubuntu9).)
OK, finally I solved it. You should create a new Gtk.Window of POPUP type, set its Colormap to your screen's RGBA colormap, have the background erased by Cairo to a transparent color, draw whatever you'd like on it, and finally pass it on to Gtk.Drag.SetIconWidget().
Sample code (presumably you'll want to use this inside OnDragBegin, or at a point where you have a valid drag context to be passed to SetIconWidget()):
Gtk.Window window = new Gtk.Window (Gtk.WindowType.Popup);
window.Colormap = window.Screen.RgbaColormap;
window.AppPaintable = true;
window.Decorated = false;
window.Resize (/* specify width, height */);

/* The cairo context can only be created when the window is being drawn by the
 * window manager, so wrap drawing code into an ExposeEvent delegate. */
window.ExposeEvent += delegate {
    Context ctx = Gdk.CairoHelper.Create (window.GdkWindow);

    /* Erase the background */
    ctx.SetSourceRGBA (0, 0, 0, 0);
    ctx.Operator = Operator.Source;
    ctx.Paint ();

    /* Draw whatever you'd like to here, and then clean up by calling
       Dispose() on the context's target. */
    (ctx.Target as IDisposable).Dispose ();
};

Gtk.Drag.SetIconWidget (drag_context, window, 10, 10);