Quickest way to get started with Cairo

I have taken passing shots at learning Cairo in the past, but always moved on in favor of some other graphics library. My problem is that I can't find a good tutorial that gives me a simple display for my surface. I have always ended up digging through GTK or Qt documentation about things that have nothing to do with what I want to do. I want to learn Cairo, not a massive OO architecture.
What is a bare bones wrapper to give me a cross-platform window with a Cairo canvas to draw on?

I have used cairo for virtually everything involving drawing. I work at a medical software company, so I prototype scientific data visualization and other things.
I usually display my drawings in one of three ways:
A GTK drawing area, created from a Python script with PyGTK;
A PNG image displayed directly on screen using the Python Imaging Library's show() method;
A PNG image saved to disk, also via the Python Imaging Library.
A simple script derived from the cairographics.org examples, which I actually use as a template for any new project:
import gtk

class Canvas(gtk.DrawingArea):
    def __init__(self):
        super(Canvas, self).__init__()
        self.connect("expose_event", self.expose)
        self.set_size_request(800, 500)

    def expose(self, widget, event):
        cr = widget.window.cairo_create()
        rect = self.get_allocation()
        # you can use w and h to calculate relative positions, which
        # also change dynamically if the window gets resized
        w = rect.width
        h = rect.height
        # here is the part where you actually draw
        cr.move_to(0, 0)
        cr.line_to(w/2, h/2)
        cr.stroke()

window = gtk.Window()
canvas = Canvas()
window.add(canvas)
window.set_position(gtk.WIN_POS_CENTER)
window.connect("destroy", gtk.main_quit)  # quit the main loop when the window closes
window.show_all()
gtk.main()
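The script above targets PyGTK 2, which is no longer maintained; under GTK 3 with PyGObject the equivalent uses the draw signal, which already hands you a ready-made cairo context. A minimal sketch, assuming PyGObject is installed:
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def draw(widget, cr):
    # cr is a cairo context supplied by GTK; draw on it as usual
    w = widget.get_allocated_width()
    h = widget.get_allocated_height()
    cr.move_to(0, 0)
    cr.line_to(w / 2, h / 2)
    cr.stroke()

window = Gtk.Window()
canvas = Gtk.DrawingArea()
canvas.set_size_request(800, 500)
canvas.connect("draw", draw)
window.add(canvas)
window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()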
Or if you prefer not to deal with GUI toolkits, you can create and display an image on screen, and optionally save it to file:
import math
import cairo, Image

width = 800
height = 600
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, width, height)
cr = cairo.Context(surface)

# optional conversion from screen to cartesian coordinates:
cr.translate(0, height)
cr.scale(1, -1)

# something very similar to the Japanese flag:
cr.set_source_rgb(1, 1, 1)
cr.rectangle(0, 0, width, height)
cr.fill()
cr.arc(width/2, height/2, 150, 0, 2 * math.pi)
cr.set_source_rgb(1, 0, 0)
cr.fill()

im = Image.frombuffer("RGBA",
                      (width, height),
                      surface.get_data(),
                      "raw",
                      "BGRA",  # cairo stores ARGB32 as BGRA on little-endian machines
                      0, 1)    # 'raw' decoder args: stride (0 = auto) and orientation (1 = top-to-bottom)
im.show()
# im.save('filename.png', 'PNG')
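If all you need is the image on disk, pycairo can also write the PNG itself via ImageSurface.write_to_png, with no PIL involved. A minimal sketch (the file name is just an example):
import math
import cairo

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 800, 600)
cr = cairo.Context(surface)
cr.set_source_rgb(1, 0, 0)
cr.arc(400, 300, 150, 0, 2 * math.pi)
cr.fill()
surface.write_to_png("circle.png")  # writes the surface straight to a PNG file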

An answer to a related question demonstrates a very simple setup in Gtk2HS to draw on a DrawingArea with Cairo.
import Graphics.UI.Gtk
import Graphics.Rendering.Cairo

main :: IO ()
main = do
  initGUI
  window <- windowNew
  drawingArea <- drawingAreaNew
  containerAdd window drawingArea
  drawingArea `onExpose` (\_ -> renderScene drawingArea)
  window `onDestroy` mainQuit
  windowSetDefaultSize window 640 480
  widgetShowAll window
  mainGUI

renderScene :: DrawingArea -> IO Bool
renderScene da = do
  dw <- widgetGetDrawWindow da
  renderWithDrawable dw $ do setSourceRGBA 0.5 0.5 0.5 1.0
                             moveTo 100.0 100.0
                             showText "HelloWorld"
  return True
Simply pass your Cairo animation routine to renderWithDrawable dw in renderScene.

Related

How to get depth images from the camera in pyBullet

In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing: img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically, I want to get the images that are shown in the Synthetic Camera RGB data and Synthetic Camera Depth data previews (especially the depth one), which are the camera windows you can see on the left of the debug visualizer GUI.
import pybullet as p
from PIL import Image

# yaw, pitch, center_x, center_y, counter and the two lists are defined elsewhere
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch,
                             cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]    # RGB pixel data
depthBuffer = img[3]  # depth pixel data
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest' + str(counter) + '.jpg')
depim.save('test_img/depth' + str(counter) + '.tiff')
counter += 1
I already run the following, so I don't know if it is related to the settings: p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods, because the depth part is complicated. I don't understand whether it needs to be treated separately because of the pixel color information, or whether I need to work with the projection and view matrices.
I need to save it as a .tiff because I get "cannot save F to png" errors. I tried playing a bit with the bit information but accomplished nothing. In case you ask, here is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image.
And to end, just to remark that these depth images keep changing: looking through all of them, then at the RGB images, and then back at the depth images shows different results, even though they come from the same capture. I have never seen anything like this before.
I thought, "I managed to fix this some time ago, I might as well post the answer I found."
The data structure of img has to be taken into account!
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# IMG_SIZE, near, far, obj_name and counter are defined elsewhere;
# near and far must match the clipping planes of the projection matrix in use
img = p.getCameraImage(224, 224, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])
# linearize the non-linear depth buffer values into true distances
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')  # drop the alpha channel before saving as JPEG
rgbim_no_alpha.save('dataset/' + obj_name + '/' + obj_name + '_rgb_' + str(counter) + '.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/' + obj_name + '/' + obj_name + '_depth_' + str(counter) + '.jpg', depth_buffer_opengl)
# plt.show()
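If you want to keep metric depth rather than a color-mapped JPEG, one option is to scale the linearized depth into the 16-bit integer range before saving, since PIL cannot write 32-bit float ("F" mode) images as PNG. A sketch, assuming the near and far values from above (the file name is just an example):
depth_normalized = (depth_opengl - near) / (far - near)    # map distances to the 0..1 range
depth_uint16 = (depth_normalized * 65535).astype(np.uint16)
Image.fromarray(depth_uint16).save('depth_16bit.png')      # 16-bit grayscale PNG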
Final images: (screenshots omitted)

Layer created is empty even after assigning content to it

I'm just trying to do a quick prototype following this tutorial:
https://www.youtube.com/watch?v=3zaxrXK7Nac
I'm using my own design for this. The problem is as follows:
When I create a new layer (at 8:13 in the posted video) and try to set one of my imported layers as the content of this new layer by using the image property, I get no results.
If I bring this new layer onto the screen I can only see a black background with transparency; according to the tutorial, it should have the layer I'm assigning to it via the image property.
Here is an example of my code:
sketch = Framer.Importer.load("imported/Untitled#2x")

explore_layer = new Layer
    width: 750
    height: 1334
    image: sketch.explore.explore_group
    x: screen.width

sketch.Tab_3.on Events.Click, ->
    explore_layer.animate
        properties:
            x: 0
            y: 0
        curve: "spring(400, 35, 0)"
Here is also a screenshot of my layers
https://gyazo.com/f3fccf7f38813744ea17d259463fabdc
Framer will always import the groups on the selected page of Sketch, and all the groups on that page will be transformed into layers that are available on the sketch object directly.
Also: you're currently setting the image of a layer to a layer object itself, instead of to the image of the sketch layer.
So to get it to work, you need to do a couple of things:
Place all the elements that you want to use on the same page in Sketch
After importing, access those elements directly from the sketch object (so sketch.explore_group instead of sketch.explore.explore_group)
Use the image of the sketch layer, or use the sketch layer itself in your prototype
Here's an example of how that would look:
sketch = Framer.Importer.load("imported/Untitled#2x")

explore_layer = new Layer
    width: 750
    height: 1334
    image: sketch.explore_group.image
    x: screen.width

sketch.Tab_3.on Events.Click, ->
    explore_layer.animate
        properties:
            x: 0
            y: 0
        curve: "spring(400, 35, 0)"
Or even shorter, and with an updated animation syntax:
sketch = Framer.Importer.load("imported/Untitled#2x")
sketch.explore_group.x = screen.width

sketch.Tab_3.on Events.Click, ->
    sketch.explore_group.animate
        x: 0
        y: 0
        options:
            curve: Spring(tension: 400, friction: 35)

World.QueryAABB giving incorrect results in libgdx

I'm trying to implement mouse selection for my game. When I call QueryAABB, it looks like it's treating objects as much larger than they really are.
Here's what's going on in the image:
The blue box is an actor containing a body that I'd like to select
The outline on the blue box is drawn by Box2DDebugRenderer
The mouse selects a region on the screen (the white box); this is entirely graphical
The AABB is converted to meters and passed to QueryAABB
The callback was called for the blue box and turned it red
The green outline left behind is a separate body to check whether my conversions were correct; it is not used for the actual selection process
It seems to be connected to my meter size: the larger it is, the more inaccurate the result. At 1 meter = 1 pixel it works perfectly.
Meter conversions
val MetersToPixels = 160f
val PixelsToMeters = 1/MetersToPixels
def toMeters(n: Float) = n * PixelsToMeters
def toPixels(n: Float) = n * MetersToPixels
In the image I'm using MetersToPixels = 160f so the inaccuracy is more visible, but I really want MetersToPixels = 16f.
Relevant selection code
val x1 = selectPos.x
val y1 = selectPos.y
val x2 = getX
val y2 = getY + getHeight
val (l, r) = if (x2 < x1) (x2, x1) else (x1, x2)
val (b, t) = if (y2 < y1) (y2, y1) else (y1, y2)
world.QueryAABB(selectCallback, toMeters(l), toMeters(b), toMeters(r), toMeters(t))
This code is inside the act method of my CursorActor class. selectPos represents the initial point where the user pressed the left mouse button, and getX and getY are Actor methods giving the current position. The next bit sorts the coordinates, because they might be out of order, and then converts them to meters, since they are all in pixel units.
selectCallback: QueryCallback
override def reportFixture(fixture: Fixture): Boolean = {
  fixture.getBody.getUserData match {
    case selectable: Selectable =>
      selected += selectable
      true
    case _ => true
  }
}
Selectable is a trait that sets a boolean flag internally after the query, which helps determine the color of the blue box. And selected is a mutable.HashSet[Selectable] defined inside CursorActor.
Other things possibly worth noting:
I'm new to libgdx and box2d.
The camera is scaled x2
My Box2DDebugRenderer uses the camera's combined matrix multiplied by MetersToPixels
From what I was able to gather, QueryAABB is naturally inaccurate as an optimization: it tests fixtures' broad-phase bounding boxes (which Box2D fattens slightly) rather than their exact shapes. However, I've hit a roadblock with libgdx, because it doesn't have any publicly visible function like b2TestOverlap, and from what I understand there's no plan for one any time soon.
I think my best solution would probably be to use jbox2d and pretend that libgdx's physics implementation doesn't exist.
Or, as noone suggested, I could add it to libgdx myself.
UPDATE
I decided to go with a simple solution: gathering the vertices from the fixture's shape and testing them against the vertices of the selection with com.badlogic.gdx.math.Intersector. It works, I guess. I may stop using QueryAABB altogether if I decide to switch to using a sensor for the select box.

vb6 Inner Form Resize

Is it possible to resize a VB6 form's inner (client) area? If I use Form1.Height or Form1.Width, the values include the window border height and width, so my code only works under one window theme (e.g. it works best in WinXP with the XP theme, but not in WinXP with the Classic theme, where the form comes out too long). Any suggestions?
What you can do is compare the Width (the outside size) to the ScaleWidth (the inside size) to get the size of the non-client border. Likewise, you can compare the Height to the ScaleHeight to get the non-client size at the top and bottom. From that you can set your final height and width based on the inner (client-area) size you want plus the non-client size.
Something like this could go in your Form_Load:
Const DesiredClientHeight As Single = 3435
Const DesiredClientWidth As Single = 3345
Dim fNonClientHoriz As Single, fNonClientVert As Single

fNonClientHoriz = Me.Width - Me.ScaleWidth
fNonClientVert = Me.Height - Me.ScaleHeight

Me.Width = DesiredClientWidth + fNonClientHoriz
Me.Height = DesiredClientHeight + fNonClientVert
Be aware that the form width and height are always in Twips, so if you change your scale mode to something other than twips you will need to account for that.

How to get Pango Cairo to word wrap properly?

I'm having problems getting Pango Cairo to word wrap. Below is some demo code. I am setting the layout's width to the same as the red rectangle, so I would expect it to wrap to the red rectangle. As it is, it simply puts one word on each line, as though the width were set very small. If I use pango.WRAP_WORD_CHAR, I only get one character per line.
What am I doing wrong? How do I get the layout to wrap to the width I specified?
EDIT: If I set the width to 100000, the words wrap correctly. This implies that set_width and the surface's construction arguments are using different units. Any ideas?
# -*- coding: utf-8 -*-
import sys
import cairo
import pango
import pangocairo

SIZE = 200
HALF = 100
QUARTER = 50

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, SIZE, SIZE)
context = cairo.Context(surface)
context.set_source_rgb(1, 0, 0)
context.rectangle(QUARTER, QUARTER, HALF, HALF)
context.fill()
context.set_source_rgb(1, 1, 0)
context.translate(QUARTER, QUARTER)

pangocairo_context = pangocairo.CairoContext(context)
layout = pangocairo_context.create_layout()
layout.set_width(HALF)
layout.set_alignment(pango.ALIGN_LEFT)
layout.set_wrap(pango.WRAP_WORD)
layout.set_font_description(pango.FontDescription("Arial 10"))
layout.set_text("The Quick Brown Fox Jumps Over The Piqued Gymnast")

pangocairo_context.update_layout(layout)
pangocairo_context.show_layout(layout)
context.show_page()

with open("test.png", "wb") as op:
    surface.write_to_png(op)
I found the answer: the value must be multiplied by pango.SCALE (which is 1024), because set_width takes Pango units rather than device pixels. This doesn't seem to be mentioned in the documentation for the function in the C API.
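Applied to the demo code above, the fix is a one-line change:
layout.set_width(HALF * pango.SCALE)  # set_width expects Pango units, not pixels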