Scribble programming in GTK 3

How do I use GTK 3 to build a scribble program? I found an example on the official GTK 3 website, but while drawing it uses cairo_rectangle to render the user input, which is very slow compared to gdk_draw_lines() in GTK 2. The Cairo functions do not seem able to capture data pixel by pixel.
What I want to know: is there any function in GTK 3 that draws faster, captures the (x, y) point, and draws it point by point (pixel by pixel) in my drawing area?

The documentation for gdk_draw_lines says it is deprecated, because all drawing was delegated to Cairo a long time ago. It also says that you may use cairo_line_to to connect your points, and cairo_stroke to draw a line through those points.
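As a rough sketch (not the official scribble example), a GTK 3 "draw" handler along these lines connects the stored points and strokes them in one call; ScribbleApp and its points array are hypothetical stand-ins for wherever you keep the coordinates captured from motion events:

#include <gtk/gtk.h>

/* Hypothetical storage for the scribbled points captured from motion events. */
typedef struct { double x, y; } ScribblePoint;
typedef struct {
    ScribblePoint points[4096];
    int n_points;
    int width, height;   /* widget size, used in the CTM sketch further below */
} ScribbleApp;

static gboolean on_draw(GtkWidget *widget, cairo_t *cr, gpointer data)
{
    ScribbleApp *app = data;

    cairo_set_source_rgb(cr, 0, 0, 0);
    cairo_set_line_width(cr, 1.0);

    if (app->n_points > 0) {
        cairo_move_to(cr, app->points[0].x, app->points[0].y);
        for (int i = 1; i < app->n_points; i++)
            cairo_line_to(cr, app->points[i].x, app->points[i].y);
        cairo_stroke(cr);   /* one stroke renders the whole polyline */
    }
    return FALSE;
}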
ADDENDUM:
Cairo is a vector graphics library: by design, it does not offer pixel-by-pixel access. However, you can approximate it by changing your transformation matrix so that it reflects your pixel coordinates. Have a look at the CTM (Current Transformation Matrix) modification functions, in particular cairo_scale. You can catch the configure-event of your GtkDrawingArea to be notified when its size changes and then adjust the CTM accordingly.
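A minimal sketch of that CTM idea, reusing the same hypothetical ScribbleApp struct from the sketch above and an assumed logical canvas size LOGICAL_W x LOGICAL_H: remember the widget size in the configure-event handler, then scale the context in the draw handler.

/* Assumed logical canvas size; drawing code works in these "logical pixels". */
#define LOGICAL_W 320
#define LOGICAL_H 240

static gboolean on_configure(GtkWidget *widget, GdkEventConfigure *event,
                             gpointer data)
{
    ScribbleApp *app = data;
    app->width  = event->width;    /* remember the new widget size */
    app->height = event->height;
    return FALSE;
}

static gboolean on_draw_scaled(GtkWidget *widget, cairo_t *cr, gpointer data)
{
    ScribbleApp *app = data;

    /* scale the CTM so logical coordinates fill the current widget size */
    cairo_scale(cr,
                (double)app->width  / LOGICAL_W,
                (double)app->height / LOGICAL_H);

    /* ...then draw in logical coordinates exactly as in the handler above... */
    return FALSE;
}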

Related

How to plot vector (not rastered as pixels) graphics in opencv

Being used to Matlab and its great vector-graphics plotting capabilities, I am looking for something similar in OpenCV. The OpenCV drawing functions seem to rasterize lines and points at the pixel level. Currently, I dump the data to text, copy-paste it into Matlab, and do all the plots there. I also thought about using the Matlab engine to pass it the parameters and run the plots, but that seems like too much of a mess for a simple debug operation.
I want to be able to do the following:
Zoom in, out of the image
Draw a line/point which is re-rasterized each time I zoom, like in Matlab.
So far I have found the Image Watch plugin to take care of zooming, but it does not help with the second part.
Any idea?
OpenCV has a lot of capabilities for processing an image but only minimal ones for displaying the result. It has nothing that can display vector graphics like Matlab. When I need to see polygons on an image (or just polygons), I dump them to a file and use a third-party viewer (usually the Giv viewer).

Drawing image stamps along a path with Cairo

As part of my initial research into whether Cairo is a good fit for us, I'm looking to see whether I can obtain an (x, y) point at a given distance from the start of a path. I have looked over the Cairo examples and APIs, but I haven't found anything to suggest this is possible. It would be a pain if we had to build our own Bezier path implementation from scratch.
On Android there is a class called PathMeasure, which allows getting an (x, y) point at a given distance from the start of a path. That lets me easily draw a stamp at the wanted distance and produce something like the image below.
Hopefully someone can point me in the right direction.
Unless I have an incomplete understanding of what you mean by "path", it seems that you can accomplish the task by starting from this guide. You would use multiple cr transformations (image location, rotation, scale) and just one image instance.
From what I can understand from your image, you'll need to use blending (e.g. the alpha channel). I would set the alpha channel (transparency) pixel by pixel, proportional to (or equal to) your original grayscale values, and set all the R, G, B pixel values to black (0).
To act directly on your input image (the file you will be loading): by googling "convert grayscale image to alpha" I found several results for Photoshop and some for GIMP; I don't know what you have available.
Otherwise you will have to do it directly in your code by accessing the image pixels. To read/edit pixel values you can use cairo_image_surface_get_data. First you have to create a destination image with cairo_image_surface_create, using the format CAIRO_FORMAT_ARGB32.
Similarly, you can use cairo_mask, drawing a black rectangle the size of your image, after having created an alpha-channel image of format CAIRO_FORMAT_A8 from your original image (again, accessing the pixels one by one seems to be the only way, given the limitations of cairo_image_surface_create_from_png).
Using cairo_paint_with_alpha in place of cairo_paint is not suitable because the alpha channel would be constant for the whole image.
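As an illustration of that pixel-level approach (a rough sketch under assumptions, not tested code): build a CAIRO_FORMAT_A8 mask whose alpha follows the grayscale value of the stamp loaded with cairo_image_surface_create_from_png, then paint black through it with cairo_mask_surface. Taking the red channel as the grayscale value and mapping dark pixels to opaque are assumptions about the stamp image.

#include <cairo.h>
#include <stdint.h>

/* Sketch: derive a CAIRO_FORMAT_A8 alpha mask from a grayscale PNG stamp. */
static cairo_surface_t *make_alpha_mask(const char *png_path)
{
    cairo_surface_t *src = cairo_image_surface_create_from_png(png_path);
    int w = cairo_image_surface_get_width(src);
    int h = cairo_image_surface_get_height(src);

    cairo_surface_t *mask = cairo_image_surface_create(CAIRO_FORMAT_A8, w, h);
    cairo_surface_flush(src);
    cairo_surface_flush(mask);

    unsigned char *sdata = cairo_image_surface_get_data(src);
    unsigned char *mdata = cairo_image_surface_get_data(mask);
    int sstride = cairo_image_surface_get_stride(src);
    int mstride = cairo_image_surface_get_stride(mask);

    for (int y = 0; y < h; y++) {
        uint32_t *row = (uint32_t *)(sdata + y * sstride);
        for (int x = 0; x < w; x++) {
            unsigned char gray = (row[x] >> 16) & 0xff; /* red channel, assumed */
            mdata[y * mstride + x] = 255 - gray;        /* dark = opaque, assumed */
        }
    }
    cairo_surface_mark_dirty(mask);
    cairo_surface_destroy(src);
    return mask;
}

/* Later, when stamping at (stamp_x, stamp_y) on the target context cr:
 *   cairo_set_source_rgb(cr, 0, 0, 0);
 *   cairo_mask_surface(cr, mask, stamp_x, stamp_y);
 */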

Is there any open source sdk like cam scanner in iphone sdk [duplicate]

I am stuck on a feature in my application. I want a cropping feature similar to CamScanner's cropping.
The screens of CamScanner are:
I have created a similar crop view.
I have obtained the CGPoint of each of the four corners.
But how can I obtain the cropped image when the selection is slanted?
Please provide me some suggestions if possible.
This is a perspective transform problem: in this case, a 3D projection is being mapped onto a 2D plane.
Since the first image has its selection corners in a quadrilateral shape, transforming it into a rectangular shape means you will either need to add pixel information (interpolation) or remove some pixels.
So the actual problem is to add the missing pixel information to the cropped image and re-project it to generate the second image. This can be implemented in various ways:
- you can implement it yourself by applying a perspective transformation matrix with interpolation;
- you can use OpenGL;
- you can use OpenCV;
...and there are many more ways to implement it.
I solved this problem using OpenCV. The following OpenCV functions will help you achieve this:
cvPerspectiveTransform
cvWarpPerspective
The first function calculates the transformation matrix from source and destination projection coordinates. In your case the src array will contain the CGPoint values for all the corners, and dst will contain the rectangular projection points, for example {(0,0), (200,0), (200,150), (0,150)}.
Once you have the transformation matrix, you need to pass it to the second function. You can also visit this thread.
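For illustration only, a hedged sketch using the legacy OpenCV C API (not the asker's code): in that API the 3x3 matrix is usually computed with cvGetPerspectiveTransform from the four point correspondences and then applied with cvWarpPerspective. The 200x150 output size matches the example destination points above; warp_to_rect is a hypothetical helper name.

#include <opencv2/core/core_c.h>
#include <opencv2/imgproc/imgproc_c.h>

/* Hypothetical sketch: warp the user-selected quadrilateral (the four corner
 * points, converted to CvPoint2D32f) into an upright 200x150 image. */
IplImage *warp_to_rect(IplImage *src, CvPoint2D32f corners[4])
{
    CvPoint2D32f dst_pts[4] = { {0, 0}, {200, 0}, {200, 150}, {0, 150} };
    CvMat *map = cvCreateMat(3, 3, CV_32FC1);

    /* compute the 3x3 perspective matrix from the point correspondences */
    cvGetPerspectiveTransform(corners, dst_pts, map);

    IplImage *dst = cvCreateImage(cvSize(200, 150), src->depth, src->nChannels);
    cvWarpPerspective(src, dst, map,
                      CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));

    cvReleaseMat(&map);
    return dst;   /* caller releases with cvReleaseImage */
}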
There may be a few other alternatives to the OpenCV library, but it has a good collection of image-processing algorithms.
An iOS application using the OpenCV library is available at eosgarden.
I see two possibilities. The first is to calculate a transformation matrix that slants the image and install it in the CATransform3D property of your view's layer.
That would be simple, assuming you knew how to form the transformation matrix that does the stretching. I've never learned how to construct transformation matrices that stretch or skew images, so I can't be of much help there. I'd suggest googling transformation matrices and stretching/skewing.
The other way would be to turn the part of the image you are cropping into an OpenGL texture and map the texture onto your output. The actual texture-drawing part would be easy, but there is a mountain of OpenGL setup to do and a whole lot of learning required to get anything done at all. If you want to pursue that route, I'd suggest searching for simple 2D texture examples using the new iOS 5 GLKit.
Use the code given at this link: http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Instead of using CGRect and CGContextClipToRect, try using CGContextEOClip or CGContextClosePath.
I haven't tried this myself, but I have drawn a closed path using CGContextClosePath in the touchesBegan, touchesMoved, and touchesEnded events.
Hope this gives more insight into your problem.
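For what it's worth, an untested sketch of that clipping idea in plain Core Graphics (drawQuadCrop is a hypothetical helper; the corner points, image rect, and CGImageRef come from your own code). Note this only crops to the slanted quadrilateral; it does not un-slant it the way the perspective-warp answer above does.

#include <CoreGraphics/CoreGraphics.h>

/* Hypothetical helper: clip to the closed four-corner path, then draw the
 * image so only the slanted quadrilateral region is kept. */
void drawQuadCrop(CGContextRef ctx, CGImageRef cgImage,
                  CGRect imageRect, const CGPoint corners[4])
{
    CGContextSaveGState(ctx);
    CGContextBeginPath(ctx);
    CGContextMoveToPoint(ctx, corners[0].x, corners[0].y);
    for (int i = 1; i < 4; i++)
        CGContextAddLineToPoint(ctx, corners[i].x, corners[i].y);
    CGContextClosePath(ctx);
    CGContextClip(ctx);                          /* or CGContextEOClip(ctx) */
    CGContextDrawImage(ctx, imageRect, cgImage);
    CGContextRestoreGState(ctx);
}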

Screen-to-World coordinate conversion in OpenGLES an easy task?

The Screen-to-world problem on the iPhone
I have a 3D model (CUBE) rendered in an EAGLView and I want to be able to detect when I am touching the center of a given face (From any orientation angle) of the cube. Sounds pretty easy but it is not...
The problem:
How do I accurately relate screen coordinates (the touch point) to world coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world axis might seem the logical fix, but problems arise when I need to zoom or rotate the 3D space. Note: rotating and zooming in and out of the 3D space changes the relationship of the 2D screen coords to the 3D world coords... You also have to allow for the 'distance' between the viewpoint and objects in the 3D space. At first this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between between screen and world, but the task of creating such a framework would require some serious design and would likely take 'time' to do -- NOT something that can be one-manned in 4 hours...And 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
const GLint viewport[4], GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
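As a hedged sketch of that near/far approach (assuming a desktop-style GL/GLU build where the matrices can be queried with glGetDoublev; with the iphone-glu port you would supply your own matrices instead, and touch_to_ray is just a hypothetical helper name):

#include <GL/glu.h>   /* desktop GLU; replace with the iphone-glu port on iOS */

/* Sketch: build the world-space pick ray for a touch at (touch_x, touch_y). */
void touch_to_ray(float touch_x, float touch_y,
                  GLdouble near_pt[3], GLdouble far_pt[3])
{
    GLdouble model[16], proj[16];
    GLint view[4];

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    /* touch y grows downwards, GL window y grows upwards, so flip it */
    GLdouble winX = touch_x;
    GLdouble winY = view[3] - touch_y;

    gluUnProject(winX, winY, 0.0, model, proj, view,
                 &near_pt[0], &near_pt[1], &near_pt[2]);   /* near plane */
    gluUnProject(winX, winY, 1.0, model, proj, view,
                 &far_pt[0],  &far_pt[1],  &far_pt[2]);    /* far plane  */
    /* near_pt -> far_pt is the pick ray through the touch point */
}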
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
You need the OpenGL projection and modelview matrices. Multiply them to get the modelview-projection matrix, and invert that to get a matrix that transforms clip-space coordinates into world coordinates. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 for X and Y respectively.
Construct two points, one at (0,0,0) and one at (touch_x, touch_y, -1), and transform both by the inverse modelview-projection matrix.
Do the inverse of a perspective divide.
You should get two points describing a line from the center of the camera into "the far distance" (the farplane).
Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web.
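As a reference point for that, the usual ray/box test is the slab method; here is a self-contained sketch (it assumes the ray and the axis-aligned box are already in the same coordinate space):

#include <float.h>
#include <math.h>

/* Slab method: ro = ray origin, rd = ray direction (e.g. p2 - p1),
 * box given by its min/max corners. Returns 1 on hit, 0 otherwise. */
int ray_hits_aabb(const float ro[3], const float rd[3],
                  const float bmin[3], const float bmax[3])
{
    float tmin = -FLT_MAX, tmax = FLT_MAX;

    for (int i = 0; i < 3; i++) {
        if (fabsf(rd[i]) < 1e-8f) {
            /* ray parallel to this slab: must already be inside it */
            if (ro[i] < bmin[i] || ro[i] > bmax[i]) return 0;
        } else {
            float t1 = (bmin[i] - ro[i]) / rd[i];
            float t2 = (bmax[i] - ro[i]) / rd[i];
            if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
            if (t1 > tmin) tmin = t1;
            if (t2 < tmax) tmax = t2;
            if (tmin > tmax) return 0;
        }
    }
    return tmax >= 0.0f;  /* intersection lies in front of the ray origin */
}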
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read back the color at the touch point, which tells you which model was touched.
Here's source for a cursor I wrote for a little project using bullet physics:
float x = ((float)mpos.x / screensize.x) *  2.0f - 1.0f;
float y = ((float)mpos.y / screensize.y) * -2.0f + 1.0f;
p2 = renderer->camera.unProject(vec4(x, y, 1.0f, 1));
p2 /= p2.w;
vec4 pos = activecam.GetView().col_t;
p1 = pos + (((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);
p1.w = 1.0f;

btCollisionWorld::ClosestRayResultCallback rayCallback(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z), rayCallback);
if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        renderer->setHighlight(0);
    }
    else if (body)
    {
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr << "hit ";
            //cerr << ent->getName() << endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
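A short sketch of the read-back step of that approach (assuming the simplified, flat-colored picking pass has just been rendered; pick_face_id and the red-channel encoding are assumptions rather than a fixed scheme):

#include <OpenGLES/ES1/gl.h>   /* assumed ES 1.x header; adjust for your setup */

/* Sketch: after rendering the flat-colored picking pass, return the id
 * encoded in the red channel of the pixel under the touch point. */
int pick_face_id(int touch_x, int touch_y)
{
    GLubyte pixel[4];
    GLint view[4];

    glGetIntegerv(GL_VIEWPORT, view);

    /* flip y: touch coordinates start at the top, GL window coordinates at the bottom */
    glReadPixels(touch_x, view[3] - touch_y, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixel);

    return pixel[0];   /* assumes the red channel encodes the face index */
}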
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to original objects will let you determine what object is under the mouse cursor.
Google for opengl screen to world (for example there’s a thread where somebody wants to do exactly what you are looking for on GameDev.net). There is a gluUnProject function that does precisely this, but it’s not available on iPhone, so that you have to port it (see this source from the Mesa project). Or maybe there’s already some publicly available source somewhere?