Refresh drawing area in GTK

I have a bunch of drawing areas (they are actually cairo surfaces, but I don't think that matters too much) in a scrolled window, and I would like to refresh the drawings. However, when I redraw the images, they are not shown until I scroll the window up and down. After that the figures are correct, so I have to conclude that the drawing routine itself is fine. I have also included a
while Gtk.events_pending():
    Gtk.main_iteration()
loop to wait for all pending operations, but that does not solve the problem. Could someone point out to me what else is missing?
Thanks,
v923z
OK, so here are the larger chunks of the code. First, a class defining the drawing area onto which I am going to paint:
class Preview:
    def __init__(self):
        self.frame = Gtk.Frame()
        self.frame.set_shadow_type(Gtk.ShadowType.IN)
        self.frame.show()
        self.da = Gtk.DrawingArea()
        self.da.set_size_request(200, 300)
        self.da.connect('configure-event', self.configure_event)
        self.da.connect('draw', self.on_draw)
        self.frame.add(self.da)
        self.da.show()

    def configure_event(self, da, event):
        allocation = da.get_allocation()
        self.surface = da.get_window().create_similar_surface(cairo.CONTENT_COLOR,
                                                              allocation.width,
                                                              allocation.height)
        cairo_ctx = cairo.Context(self.surface)
        cairo_ctx.set_source_rgb(1, 1, 1)
        cairo_ctx.paint()
        return True

    def on_draw(self, da, cairo_ctx):
        cairo_ctx.set_source_surface(self.surface, 0, 0)
        cairo_ctx.paint()
        return True
Next, the point where I actually create the drawing areas. viewport_preview is a viewport created in Glade.
self.previews = []
self.widget('viewport_preview').remove(self.vbox_preview)
self.vbox_preview = Gtk.VBox(homogeneous=False, spacing=8)
self.widget('viewport_preview').add(self.vbox_preview)
self.vbox_preview.show()
for page in self.pages:
    preview = Preview()
    self.vbox_preview.pack_start(preview.frame, False, False, 10)
    self.previews.append(preview)
while Gtk.events_pending():
    Gtk.main_iteration()
self.draw_preview(None)
return True
Then the function drawing the previews. This is really just a wrapper for the next function; I need it only because deleting an entry from the previews has to be handled somewhere. I believe the while loop at the end of this function is not necessary, since there is one at the end of the next function anyway.
def draw_preview(self, counter=None):
    if counter is not None:
        self.vbox_preview.remove(self.previews[counter].frame)
        self.previews.pop(counter)
        self.pages.pop(counter)
        self.vbox_preview.show()
        while Gtk.events_pending():
            Gtk.main_iteration()
    for i in range(len(self.pages)):
        self.draw_note(self.previews[i].da, self.previews[i].surface, self.pages[i])
    while Gtk.events_pending():
        Gtk.main_iteration()
Finally, the drawing function itself:
def draw_note(self, widget, surface, page):
    list_pos = '%d/%d' % (self.page + 1, len(self.pages))
    self.widget('label_status').set_text(list_pos)
    cairo_ctx = cairo.Context(surface)
    cairo_ctx.set_source_rgb(page.background[0], page.background[1], page.background[2])
    cairo_ctx.paint()
    width, height = widget.get_size_request()
    xmin, xmax, ymin, ymax = fujitsu.page_size(page)
    factor = min(height / (2.0 * self.margin + ymax - ymin),
                 width / (2.0 * self.margin + xmax - xmin))
    factor *= 0.8
    page.scale = factor
    value = self.widget('adjustment_smooth').get_value()
    for pen in page.pagecontent:
        x = self.margin + pen.path[0][0] - xmin
        y = self.margin + pen.path[0][1] - ymin
        cairo_ctx.move_to(x * factor, y * factor)
        if self.widget('checkbutton_smooth').get_active() == False:
            [cairo_ctx.line_to((self.margin + x - xmin) * factor,
                               (self.margin + y - ymin) * factor) for x, y in pen.path]
        else:
            bezier_curve = bezier.expand_coords(pen.path, value)
            x = self.margin + bezier_curve[0][0][0] - xmin
            y = self.margin + bezier_curve[0][0][1] - ymin
            cairo_ctx.move_to(x * factor, y * factor)
            [cairo_ctx.curve_to((self.margin + control[1][0] - xmin) * factor,
                                (self.margin + control[1][1] - ymin) * factor,
                                (self.margin + control[2][0] - xmin) * factor,
                                (self.margin + control[2][1] - ymin) * factor,
                                (self.margin + control[3][0] - xmin) * factor,
                                (self.margin + control[3][1] - ymin) * factor)
             for control in bezier_curve]
        cairo_ctx.set_line_width(pen.thickness * self.zoom_factor)
        cairo_ctx.set_source_rgba(pen.colour[0], pen.colour[1], pen.colour[2], pen.colour[3])
        cairo_ctx.stroke()
    cairo_ctx.rectangle(0, height * 0.96, width, height)
    cairo_ctx.set_source_rgba(page.banner_text[0][0], page.banner_text[0][1],
                              page.banner_text[0][2], page.banner_text[0][3])
    cairo_ctx.fill()
    cairo_ctx.move_to(width * 0.05, height * 0.99)
    cairo_ctx.show_text(self.filename + ' ' + list_pos)
    cairo_ctx.set_font_size(self.zoom_factor * 10.0)
    xbearing, ybearing, twidth, theight, xadvance, yadvance = cairo_ctx.text_extents(page.banner_text[3])
    cairo_ctx.move_to(width - 1.03 * twidth, height * 0.99)
    cairo_ctx.show_text(page.banner_text[3])
    cairo_ctx.set_source_rgba(0, 0, 0.9, 0.90)
    cairo_ctx.stroke()
    rect = widget.get_allocation()
    widget.get_window().invalidate_rect(rect, False)
    while Gtk.events_pending():
        Gtk.main_iteration()
I think that's about it.

You could use gtk_widget_queue_draw_area or gdk_window_invalidate_rect. This will mark the widget (or rectangle) as dirty, and once the main loop is idle an expose event will be received, in which you can redraw. From your description it appears the updates are happening on the expose event, so these APIs might be of use. You can also check this sample from the cairo site, which shows the usage of gtk_widget_queue_draw_area.
I have not used PyGTK, but from Google I found that the corresponding call for gtk_widget_queue_draw_area is gtk.Widget.queue_draw_area, and for gdk_window_invalidate_rect it is gtk.gdk.Window.invalidate_rect.
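With GTK 3 through PyGObject, which the code in the question uses, the corresponding calls are Gtk.Widget.queue_draw_area and Gdk.Window.invalidate_rect. A minimal sketch, reusing the draw_note, previews and pages names from the question and assuming it runs inside the same class:
# After repainting each backing surface, mark the widget dirty and let the
# main loop emit 'draw' (where on_draw copies the surface to the screen),
# instead of spinning Gtk.events_pending()/Gtk.main_iteration() by hand.
for preview, page in zip(self.previews, self.pages):
    self.draw_note(preview.da, preview.surface, page)
    preview.da.queue_draw()          # or preview.da.queue_draw_area(x, y, w, h)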
Hope this helps!

Related

Changing the child's local matrix so as to offset the scaling of the parent local matrix

A <-- B <-- C is a chain of local matrices.
If I scale localA, it also affects worldB. I want worldB to stay unscaled.
The problem is that I can only edit the localB matrix and nothing else. Thus, localB must be "unscaled" before its world matrix is calculated (a sketch of one way to do this follows the figures below).
Pseudocode
scale = 1 to 5
localA = mat4.idt() * R(Y, 45) * S(1, scale, scale) * T(0, 0, 0)
localB = mat4.idt() * R(Y, 45) * S(1, 1, 1) * T(1, 0, 0) // I can only edit this
localC = mat4.idt() * R(Y, 45) * S(1, 1, 1) * T(1, 0, 0)
worldA = localA
worldB = worldA * localB
worldC = worldB * localC
Rest state: no extra scaling, just the local-to-world conversion.
localA scaled, which also scales its children.
And this is what I want it to look like.
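For what it's worth, one way to express that compensation, assuming the same parent * child multiplication order as the pseudocode, column vectors, and that localA's translation is zero as above, is to fold the inverse of the parent's scale into localB. A minimal numpy sketch; the names and the scale value are illustrative only:
import numpy as np

def S(sx, sy, sz):
    """4x4 scale matrix."""
    return np.diag([sx, sy, sz, 1.0])

scale = 3.0                   # the parent's animated scale factor, assumed known to B
local_b_rest = np.eye(4)      # B's rest-pose local matrix; only T(1, 0, 0) is filled in here
local_b_rest[0, 3] = 1.0      # the T(1, 0, 0) part of localB

# Cancel the parent's S(1, scale, scale) before B's own transform is applied:
local_b = S(1.0, 1.0 / scale, 1.0 / scale) @ local_b_rest

# With worldA = R_A * S(1, scale, scale) and zero parent translation,
# worldB = worldA @ local_b reduces to R_A @ local_b_rest, i.e. unscaled,
# and everything below B (localC, ...) no longer inherits the scaling.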

Mouse movement angle in openFrameworks

I am currently in the process of creating a sort of drawing program in openFrameworks that needs to calculate the angle of mouse movement. The reason is that the program needs to be able to draw brush strokes similar to the way Photoshop does it.
I've been able to get it to work, but in a very jaggy way. I've placed my code in the mouseDragged event in openFrameworks, but the calculated angle is extremely jaggy and not smooth in any way. It needs to be smooth for the drawing to look good.
void testApp::mouseMoved(int x, int y){
    dxX = x - oldX;
    dxY = y - oldY;
    movementAngle = (atan2(dxY, dxX) * 180.0 / PI);
    double movementAngleRad;
    movementAngleRad = movementAngle * TO_RADIANS;
    if (movementAngle < 0) {
        movementAngle += 360;
    }
    testString = "X: " + ofToString(dxX) + " ,";
    testString += "Y: " + ofToString(dxY) + " ,";
    testString += "movementAngle: " + ofToString(movementAngle);
    oldX = x;
    oldY = y;
}
I've tried different ways of making the calculation smoother, but so far without results.
If you have an idea of how this could be fixed or optimized, I will be very grateful.
I solved it to some degree by using an ofPolyline object.
The following code shows how it works.
void testApp::mouseMoved(int x, int y){
    float angleRad = 0;   // fall back to 0 until enough points have been recorded
    if (movement.size() > 4) {
        angleRad = atan2(movement[movement.size()-4].y - y, movement[movement.size()-4].x - x);
    }
    movementAngle = (angleRad * 180 / PI) + 180;
    movement.addVertex(x, y, 0);
}
As seen in the code, I'm using the point recorded 4 steps back to increase the smoothness of the angle. This works if the mouse is moved in stroke-like movements. If the mouse is moved slowly, jagginess will still occur.
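One way to reduce the remaining jitter at low speeds is to low-pass filter the movement direction rather than the angle itself (averaging raw angles misbehaves at the 0/360 wrap-around). A small Python sketch of the idea; the smoothing factor is an assumed value:
import math

smooth_dx, smooth_dy = 0.0, 0.0   # filtered direction vector, kept between events
ALPHA = 0.2                       # smoothing factor, 0 < ALPHA <= 1 (assumed value)

def on_mouse_moved(x, y, old_x, old_y):
    """Return a smoothed movement angle in degrees, in [0, 360)."""
    global smooth_dx, smooth_dy
    dx, dy = x - old_x, y - old_y
    # Exponential moving average of the direction vector, not the angle.
    smooth_dx = (1.0 - ALPHA) * smooth_dx + ALPHA * dx
    smooth_dy = (1.0 - ALPHA) * smooth_dy + ALPHA * dy
    angle = math.degrees(math.atan2(smooth_dy, smooth_dx))
    return angle + 360.0 if angle < 0 else angle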

Implementation of gluLookAt and gluPerspective

I've written a small 2D engine in OpenGL in the process of making a game. I'm using OpenGL ES 2 and the code compiles and runs on iOS and Mac OS X.
Now I'm extending it to support 3D, and I'm having a problem setting up the camera.
I've checked the code a hundred times and I can't find where the problem is, so maybe someone with experience in this can give me an idea.
This is the code I have. I'm posting the part where I think the problem might be, but if something else is needed, just ask.
Matrix4 _getFrustumMatrix(float left, float right, float bottom, float top, float near, float far){
    Matrix4 res = Matrix4(2.0 * near / (right - left), 0, 0, 0,
                          0, 2.0 * near / (top - bottom), 0, 0,
                          (right + left) / (right - left), (top + bottom) / (top - bottom), -(far + near) / (far - near), -1.0,
                          0, 0, -2.0 * far * near / (far - near), 0);
    return res;
}

Matrix4 _getPerspectiveMatrix(float near, float far, float angleOfView){
    static float aspectRatio = float(SCREENW)/float(SCREENH);
    float top = near * tan(angleOfView * 3.1415927 / 360.0);
    float bottom = -top;
    float left = bottom * aspectRatio;
    float right = top * aspectRatio;
    return _getFrustumMatrix(left, right, bottom, top, near, far);
}

Matrix4 _getLookAtMatrix(Vector3 eye, Vector3 at, Vector3 up){
    Vector3 forward, side;
    forward = at - eye;
    forward.normalize();
    side = forward ^ up;
    side.normalize();
    up = side ^ forward;

    Matrix4 res = Matrix4(side.x, up.x, -forward.x, 0,
                          side.y, up.y, -forward.y, 0,
                          side.z, up.z, -forward.z, 0,
                          0, 0, 0, 1);
    res.translate(Vector3(0 - eye));
    return res;
}

void Scene3D::_deepRender(){
    cameraEye = Vector3(10,0,40);
    cameraAt = Vector3(0,0,0);
    cameraUp = Vector3(0,1,0);

    MatrixStack::push();
    Matrix4 projection = _getPerspectiveMatrix(1, 100, 45);
    Matrix4 view = _getLookAtMatrix(cameraEye, cameraAt, cameraUp);
    MatrixStack::set(projection * view);
    Space3D::_deepRender();
    MatrixStack::pop();
}
The drawn object is a representation of the axes where x=red, y=green, z=blue, and it's located at (0,0,0).
If I put the eye at (0,0,40), everything looks as expected.
If I put the eye at (10,0,40), the object is not drawn in the middle of the screen as it should be.
This is the Matrix4::translate method:
void Matrix4::translate(const Vector3& v) {
    a14 += a11 * v.x + a12 * v.y + a13 * v.z;
    a24 += a21 * v.x + a22 * v.y + a23 * v.z;
    a34 += a31 * v.x + a32 * v.y + a33 * v.z;
    a44 += a41 * v.x + a42 * v.y + a43 * v.z;
}
EDIT: To add some information:
Using _getLookAtMatrix() with these parameters:
cameraEye = Vector3(40,40,40);
cameraAt = Vector3(0,0,0);
cameraUp = Vector3(0,1,0);
shouldn't it give me a matrix equivalent to this one?
Matrix4 view;
view.setIdentity();
view.translate(Vector3(0,0,-69.2820323)); // 69.2820323 is the length of Vector3(40,40,40)
view.rotate(45, Vector3(1,0,0));
view.rotate(-45, Vector3(0,1,0));
At least those transformations make sense to me, and the resulting image looks like what I would expect.
But this matrix and the one I get from _getLookAtMatrix() are very different:
view:
0.707106769, -0.49999997, 0.49999997, 0,
0, 0.707106769, 0.707106769, 0,
-0.707106769, -0.49999997, 0.49999997, 0,
0, 0, -69.2820358, 1
_getLookAtMatrix(cameraEye, cameraAt, cameraUp):
0.707106769, 0, -0.707106769, 0,
-0.408248276, 0.816496551, -0.408248276, 0,
0.577350259, 0.577350259, 0.577350259, 0,
-35.0483475, -55.7538719, 21.520195, 1
You seem to have some serious ordering inconsistencies in your matrix class.
For example, I assumed your Matrix4 constructor takes its arguments (the matrix elements) in column-major order, otherwise your functions wouldn't match the reference implementations of glFrustum and gluLookAt and you would get completely wrong results.
The code of your translate function also looks correct, since it has to modify the last column of the matrix, which consists of the elements (a14, a24, a34 and a44).
But your printout of the view matrix suggests that translate actually modifies the last row, unless you print the matrix in column-major format and therefore transposed. But in that case the printout of _getLookAtMatrix suggests that the Matrix4 constructor takes its arguments in row-major order, which in turn invalidates other things.
Of course all of this also depends on how you send the matrices to OpenGL and how you use them in the vertex shader (I assume ES 2.0, otherwise there would be no need for your own matrix library). If you indeed use ES 2.0, then you need to send the matrix elements to OpenGL in column-major order, and the translation has to be in the last column and not the last row.
But no matter what convention you use, there is definitely a severe inconsistency inside your matrix code. Without seeing the whole Matrix4 class, the vertex shader and the code where you upload the matrices to OpenGL, it is hard to tell where this inconsistency is.
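As a sanity check, here is a small numpy sketch of the reference gluLookAt construction, with the row/column indexing written out explicitly so it can be compared against both printouts above:
import numpy as np

def look_at(eye, at, up):
    """Reference gluLookAt: view matrix with the translation in the last column."""
    eye, at, up = (np.asarray(v, dtype=float) for v in (eye, at, up))
    f = at - eye
    f /= np.linalg.norm(f)        # forward
    s = np.cross(f, up)
    s /= np.linalg.norm(s)        # side
    u = np.cross(s, f)            # recomputed up
    rot = np.eye(4)
    rot[0, :3] = s
    rot[1, :3] = u
    rot[2, :3] = -f
    trans = np.eye(4)
    trans[:3, 3] = -eye           # move the eye to the origin first
    return rot @ trans

view = look_at([40, 40, 40], [0, 0, 0], [0, 1, 0])
print(np.round(view, 4))
# When uploaded as an ES 2.0 uniform, the 16 floats go in column-major order,
# i.e. view.ravel(order='F'): the translation then sits in elements 12-14.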

Determining what frequencies correspond to the x axis in aurioTouch sample application

I'm looking at the aurioTouch sample application for the iPhone SDK. It has a basic spectrum analyzer implemented when you choose the "FFT" option. One of the things the app is lacking is X axis labels (i.e. the frequency labels).
In the aurioTouchAppDelegate.mm file, in the function - (void)drawOscilloscope at line 652, it has the following code:
if (displayMode == aurioTouchDisplayModeOscilloscopeFFT)
{
    if (fftBufferManager->HasNewAudioData())
    {
        if (fftBufferManager->ComputeFFT(l_fftData))
            [self setFFTData:l_fftData length:fftBufferManager->GetNumberFrames() / 2];
        else
            hasNewFFTData = NO;
    }
    if (hasNewFFTData)
    {
        int y, maxY;
        maxY = drawBufferLen;
        for (y=0; y<maxY; y++)
        {
            CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1);
            CGFloat fftIdx = yFract * ((CGFloat)fftLength);
            double fftIdx_i, fftIdx_f;
            fftIdx_f = modf(fftIdx, &fftIdx_i);
            SInt8 fft_l, fft_r;
            CGFloat fft_l_fl, fft_r_fl;
            CGFloat interpVal;
            fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24;
            fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24;
            fft_l_fl = (CGFloat)(fft_l + 80) / 64.;
            fft_r_fl = (CGFloat)(fft_r + 80) / 64.;
            interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f;
            interpVal = CLAMP(0., interpVal, 1.);
            drawBuffers[0][y] = (interpVal * 120);
        }
        cycleOscilloscopeLines();
    }
}
From my understanding, this part of the code is what decides which magnitude to draw for each frequency in the UI. My question is: how can I determine which frequency each iteration (or y value) represents inside the for loop?
For example, if I want to know the magnitude for 6 kHz, I'm thinking of adding a line similar to the following:
if (yValueRepresentskHz(y, 6))
    NSLog(@"The magnitude for 6kHz is %f", (interpVal * 120));
Please note that although they chose to use the variable name y, from what I understand, it actually represents the x-axis in the visual graph of the spectrum analyzer, and the value of the drawBuffers[0][y] represents the y-axis.
I believe that the frequency of each bin it is using is given by
yFract * hwSampleRate * .5
I'm fairly certain that you need the .5 because yFract is a fraction of the total fftLength and the last bin of the FFT corresponds to half of the sampling rate. Thus, you could do something like
NSLog(@"The magnitude for %f Hz is %f.", (yFract * hwSampleRate * .5), (interpVal * 120));
Hopefully that helps to point you in the right direction at least.
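Concretely, with maxY screen columns spread over the FFT bins, each column y maps to a frequency as in the following Python sketch of the arithmetic; the sample rate and drawBufferLen values are assumptions:
sample_rate = 44100.0   # hwSampleRate on the device (assumed value)
max_y = 320             # drawBufferLen, the number of columns being drawn (assumed value)

def frequency_for_column(y):
    """Frequency (Hz) shown at screen column y, matching the yFract mapping above."""
    y_fract = y / (max_y - 1)
    return y_fract * sample_rate * 0.5    # the last column sits at Nyquist, sample_rate / 2

# e.g. the column closest to 6 kHz:
closest = min(range(max_y), key=lambda y: abs(frequency_for_column(y) - 6000.0))
print(closest, frequency_for_column(closest))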

Just in theory: how is the alpha component premultiplied into the other components of a PNG in iPhone OS, and how can it be unpremultiplied properly?

Actually, I thought there would be an easy way to achieve this. What I need is pure alpha-value information. For testing, I have a 50 x 55 px PNG in which a 5 x 5 pixel rectangle at every edge is fully transparent. In these areas alpha has to be 0; everywhere else it has to be 255. I made very sure that my PNG is created correctly, and it also looks correct.
Please tell me if this is theoretically correct:
I created a CGImageRef that has only the alpha channel and nothing else. This is done with CGBitmapContextCreate and kCGImageAlphaOnly as the parameter.
CGBitmapContextGetBitsPerPixel(context) returns 1, which indicates that I really have only one component per pixel: the desired alpha value.
I've been reading that CGBitmapContextCreate will handle all the conversion from the given image to the newly created context. My image was previously a PNG-24 with transparency, but pngcrush from Xcode seems to convert it somehow.
So, just in theory: do I have any chance of getting the correct, unpremultiplied alpha at this point? The values I get almost seem to match, but in a big 5 x 5 transparent square I get values like 19, 197, 210, 0, 0, 0, 98 and so on. If those were correct, I would have to see something of the image there. The image itself is solid blue.
Premultiplication doesn't affect the alpha channel, it affects the color channels.
The formula for raster compositing (putting one raster image over another) is:
dst.r = src.r * src.a + dst.r * (1.0 - src.a);
dst.g = src.g * src.a + dst.g * (1.0 - src.a);
dst.b = src.b * src.a + dst.b * (1.0 - src.a);
Premultiplication cuts out the first multiplication expression:
dst.r = src.r′ + dst.r * (1.0 - src.a);
dst.g = src.g′ + dst.g * (1.0 - src.a);
dst.b = src.b′ + dst.b * (1.0 - src.a);
This works because the source color components are already multiplied by the alpha component—hence the name “premultiplied”. It doesn't need to multiply them now, because it already has the results.
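A quick numeric check of that equivalence, with made-up values in the 0.0 to 1.0 range:
src_r, src_a = 0.6, 0.25   # arbitrary source red and alpha
dst_r = 0.8                # arbitrary destination red

straight = src_r * src_a + dst_r * (1.0 - src_a)   # compositing with straight alpha
src_r_pre = src_r * src_a                          # premultiply once, up front
premult = src_r_pre + dst_r * (1.0 - src_a)        # compositing with the premultiplied source

assert abs(straight - premult) < 1e-12             # both give the same result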
“unpremultiplied alpha”
The alpha component itself is never premultiplied: what would you multiply it by? The color components are premultiplied by the alpha.
Since premultiplying the color values is as simple as:
r = (r * a) / 255;
g = (g * a) / 255;
b = (b * a) / 255;
Getting the inverse would be:
if (a > 0) {
    r = (r * 255) / a;
    g = (g * 255) / a;
    b = (b * 255) / a;
}
This formula is not correct. The goal is to find unpremultiplied (r, g, b) values that would later produce the same premultiplied values again (it is not possible to recover the original r, g, b values, though).
However, with the formula above and the example of
alpha = 100
r_premulti = 1
we get a reconstructed r = 2.
If this r is premultiplied again, we get 2 * 100 / 255 = 0, but we wanted r_premulti == 1 instead!
The correct formula needs to round up. Example for the r component:
reconstructed r = ceiling(r_premulti * 255 / alpha)
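A small Python sketch of that round trip, using the same 8-bit integer arithmetic as the formulas above:
import math

def premultiply(c, a):
    """8-bit premultiply, as in r = (r * a) / 255 above (integer division)."""
    return (c * a) // 255

def unpremultiply_floor(c_pre, a):
    return (c_pre * 255) // a if a > 0 else 0

def unpremultiply_ceil(c_pre, a):
    return math.ceil(c_pre * 255 / a) if a > 0 else 0

a, r_pre = 100, 1
print(premultiply(unpremultiply_floor(r_pre, a), a))   # 0 -- the floor version loses the value
print(premultiply(unpremultiply_ceil(r_pre, a), a))    # 1 -- the ceiling version round-trips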