How can I add touch to a watch face (EFL)?

I am trying to add touch events to my watch face that I am developing with the EFL libraries. But the touch layer is either not working or, when it is, it is fully covering my watch face, although it should be transparent.
My code that creates the watch window is:
Evas_Object *win = NULL;
int ret = EINA_FALSE;
/*
* Get watch window object
*/
ret = watch_app_get_elm_win(&win);
if (ret != APP_ERROR_NONE) {
dlog_print(DLOG_ERROR, LOG_TAG, "failed to get watch window. err = %d", ret);
return NULL;
}
evas_object_resize(win, width, height);
/* <Variant A or B here> */
evas_object_show(win);
Then I try to create a transparent gesture layer and touch callbacks (as described at https://docs.tizen.org/application/native/guides/ui/efl/touch-gesture). I have tried two variants.
Variant A - I use the same window that I created for the watch (win). There are no errors, but it doesn't work; no touch events arrive:
/* Variant A */
/* Gesture layer transparent object */
Evas_Object *r;
/* Gesture layer object */
Evas_Object *g;
win = elm_win_util_standard_add("gesture_layer", "Gesture Layer");
elm_win_autodel_set(win, EINA_TRUE);
/* Gesture layer transparent object */
r = evas_object_rectangle_add(evas_object_evas_get(win));
evas_object_move(r, 0, 0);
evas_object_color_set(r, 0, 0, 255, 128);
elm_win_resize_object_add(win, r);
/* Gesture layer object */
g = elm_gesture_layer_add(win);
elm_gesture_layer_attach(g, r);
evas_object_show(r);
/* Set callbacks */
elm_gesture_layer_cb_set(g, ELM_GESTURE_N_TAPS, ELM_GESTURE_STATE_START, n_finger_tap_start, NULL);
Variant B - I use a separate object winTouch for the touch layer. It works, but the layer appears fully opaque and the watch face cannot be seen:
/* Variant B */
Evas_Object *winTouch;
/* Gesture layer transparent object */
Evas_Object *r;
/* Gesture layer object */
Evas_Object *g;
winTouch = elm_win_util_standard_add("gesture_layer", "Gesture Layer");
elm_win_autodel_set(winTouch, EINA_TRUE);
/* Gesture layer transparent object */
r = evas_object_rectangle_add(evas_object_evas_get(winTouch));
evas_object_move(r, 0, 0);
evas_object_color_set(r, 0, 0, 255, 0);
elm_win_resize_object_add(winTouch, r);
/* Gesture layer object */
g = elm_gesture_layer_add(winTouch);
elm_gesture_layer_attach(g, r);
evas_object_show(r);
evas_object_show(winTouch); // <--- Without this line, Variant B behaves like Variant A
/* Set callbacks */
elm_gesture_layer_cb_set(g, ELM_GESTURE_N_TAPS, ELM_GESTURE_STATE_START, n_finger_tap_start, NULL);
// and other callbacks
What am I doing wrong?
My project is based on the SDK sample "Chronograph Watch". There are actually four different Evas_Objects being resized:
evas_object_resize(win, width, height);
evas_object_resize(bg, width, height);
evas_object_resize(chronograph_layout, DIAM_SCREEN, DIAM_SCREEN);
evas_object_resize(parts, size_w, size_h);
Where should I add the gesture layer? When I add it to the first one (win), Variant A does nothing and Variant B works but covers the watch face. If I add it to one of the others, both variants work but cover the watch face. I don't understand why, since I am specifying a transparent color.
EDIT: One new thought: could it be that the touch gesture layer only works on apps, not on watch faces?

You don't need to create a new window for the gesture layer.
You got a window from the API; did you then create a new window with the elm_win_add API?
My sample code below works well, and the window is transparent.
Please review your code carefully, and if you can't find the problem, why don't you share your full code? I will take a look.
static Evas_Event_Flags
n_finger_tap_start(void *data , void *event_info)
{
printf("tap start\n");
return EVAS_EVENT_FLAG_ON_HOLD;
}
void
test_gesture_layer(void *data EINA_UNUSED, Evas_Object *obj EINA_UNUSED,
void *event_info EINA_UNUSED)
{
Evas_Coord w, h;
Evas_Object *win;
w = 480;
h = 800;
win = elm_win_add(NULL, "gesture-layer", ELM_WIN_BASIC);
elm_win_title_set(win, "Gesture Layer");
elm_win_autodel_set(win, EINA_TRUE);
evas_object_resize(win, w, h);
Evas_Object *r, *g;
r = evas_object_rectangle_add(evas_object_evas_get(win));
evas_object_move(r, 0, 0);
evas_object_color_set(r, 0, 0, 0, 0);
elm_win_resize_object_add(win, r);
g = elm_gesture_layer_add(win);
elm_gesture_layer_attach(g, r);
evas_object_show(r);
elm_gesture_layer_cb_set(g, ELM_GESTURE_N_TAPS,
ELM_GESTURE_STATE_START, n_finger_tap_start, NULL);
evas_object_show(win);
}
Anyway, I recommend adding a touch event callback on the window.
Refer to https://docs.tizen.org/iot/api/5.0/tizen-iot-headed/Example_Evas_Images_2.html and search that page for EVAS_CALLBACK_MOUSE_DOWN and EVAS_CALLBACK_MOUSE_UP.
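For example, a minimal sketch (hypothetical callback names; win is the handle obtained from watch_app_get_elm_win):
static void
_mouse_down_cb(void *data, Evas *e, Evas_Object *obj, void *event_info)
{
    Evas_Event_Mouse_Down *ev = event_info;
    dlog_print(DLOG_INFO, LOG_TAG, "mouse down at %d,%d", ev->canvas.x, ev->canvas.y);
}
static void
_mouse_up_cb(void *data, Evas *e, Evas_Object *obj, void *event_info)
{
    Evas_Event_Mouse_Up *ev = event_info;
    dlog_print(DLOG_INFO, LOG_TAG, "mouse up at %d,%d", ev->canvas.x, ev->canvas.y);
}
/* registered once, e.g. right after evas_object_show(win) */
evas_object_event_callback_add(win, EVAS_CALLBACK_MOUSE_DOWN, _mouse_down_cb, NULL);
evas_object_event_callback_add(win, EVAS_CALLBACK_MOUSE_UP, _mouse_up_cb, NULL);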

The widget tree of the Chronograph Watch app is
Win -> Bg -> Layout -> Images
This sample app's top view is the Layout (main.edc), so you should attach the gesture layer to that:
Evas_Object *gesture;
gesture = elm_gesture_layer_add(chronograph_layout);
elm_gesture_layer_attach(gesture, chronograph_layout);
elm_gesture_layer_cb_set(gesture, ELM_GESTURE_N_TAPS,
ELM_GESTURE_STATE_START, n_finger_tap_start, NULL);
elm_gesture_layer_cb_set(gesture, ELM_GESTURE_N_TAPS,
ELM_GESTURE_STATE_END, n_finger_tap_end, NULL);
I can get events properly.
If you want to use system-defined gestures such as flick, zoom, rotate, or long tap, you need to use a gesture layer. But if you just want to get touch events on an object, use the EVAS_CALLBACK_MOUSE_XXX callbacks instead.
Hope you got a solution.

Related

ImGui overlay with UpdateLayeredWindow function

As the title says, I'm trying to create a partially transparent in-game overlay using ImGui that's clickable on the UI but click-through otherwise, i.e. you can click on the ImGui elements, but outside of them you can interact with the game.
I was able to do it using
https://github.com/ocornut/imgui/blob/master/examples/example_win32_directx11/main.cpp
by setting the extended window style to WS_EX_TOPMOST | WS_EX_LAYERED and using
SetLayeredWindowAttributes(hwnd, RGB(0, 0, 0), 0, LWA_COLORKEY); // LWA_COLORKEY is the SetLayeredWindowAttributes flag; the ULW_* values belong to UpdateLayeredWindow
However, this would impact the rendering performance of the underlying game.
So, I decided to try the UpdateLayeredWindow function.
Here is one of the templates I referenced:
https://github.com/riley-x/TransparentWindow
This does not impact performance and worked perfectly; now I have to integrate ImGui rendering to replace the green ellipse.
However, ImGui uses the following code:
g_pd3dDeviceContext->OMSetRenderTargets(1, &g_mainRenderTargetView, NULL);
g_pd3dDeviceContext->ClearRenderTargetView(g_mainRenderTargetView, clear_color_with_alpha);
But in order to use the
BOOL UpdateLayeredWindow(
HWND hWnd,
HDC hdcDst,
POINT *pptDst,
SIZE *psize,
HDC hdcSrc,
POINT *pptSrc,
COLORREF crKey,
BLENDFUNCTION *pblend,
DWORD dwFlags
);
function, I have to tell ImGui to render to an HDC.
I've thought of some possible solutions:
I'm not sure how g_mainRenderTargetView could possibly bind to an HDC, unlike ID2D1Factory::CreateDCRenderTarget, which can actually bind to an HDC with BindDC.
Or I could let ImGui render as usual but retrieve the back buffer as a bitmap, and then do the following:
HDC hdcWnd = GetDC(hwnd);
HDC hdcMem = CreateCompatibleDC(hdcWnd);
HBITMAP memBitmap = CreateCompatibleBitmap(hdcWnd, rect.right - rect.left, rect.bottom - rect.top);
SelectObject(hdcMem, memBitmap);
m_pRenderTarget->BindDC(hdcMem, &rect);
m_pRenderTarget->BeginDraw();
m_pRenderTarget->Clear({ 0 });
m_pRenderTarget->DrawBitmap(bitmap.Get()); // ???
m_pRenderTarget->EndDraw();
POINT pt0 = { 0 };
SIZE sz = { rect.right - rect.left, rect.bottom - rect.top };
BLENDFUNCTION bfunc = { 0 };
bfunc.AlphaFormat = AC_SRC_ALPHA;
bfunc.BlendFlags = 0;
bfunc.BlendOp = AC_SRC_OVER;
bfunc.SourceConstantAlpha = 255;
UpdateLayeredWindow(hwnd, hdcWnd, &pt0, &sz, hdcMem, &pt0, 0, &bfunc, ULW_ALPHA); // ULW_ALPHA uses the per-pixel alpha from bfunc; ULW_COLORKEY would ignore it
DeleteObject(memBitmap);
ReleaseDC(hwnd, hdcWnd);
DeleteDC(hdcMem); // a memory DC from CreateCompatibleDC is freed with DeleteDC, not ReleaseDC
where m_pRenderTarget is made from:
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
D2D1_RENDER_TARGET_TYPE_DEFAULT,
D2D1::PixelFormat(
DXGI_FORMAT_B8G8R8A8_UNORM,
D2D1_ALPHA_MODE_PREMULTIPLIED),
0,
0,
D2D1_RENDER_TARGET_USAGE_NONE,
D2D1_FEATURE_LEVEL_10
);
d2Factory->CreateDCRenderTarget(&props, m_pRenderTarget.GetAddressOf());
The problem is when I tried to retrieve the backbuffer bitmap, it always failed.
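For context, the usual way to retrieve the back buffer on the CPU is to copy it into a staging texture and map that. A minimal sketch of this readback (assuming C-style COM with COBJMACROS; swapChain, device, and ctx stand for the existing DirectX 11 example objects, and the swap chain is non-multisampled B8G8R8A8):
/* Copy the back buffer into a CPU-readable staging texture. */
ID3D11Texture2D *backBuffer = NULL, *staging = NULL;
IDXGISwapChain_GetBuffer(swapChain, 0, &IID_ID3D11Texture2D, (void **)&backBuffer);
D3D11_TEXTURE2D_DESC desc;
ID3D11Texture2D_GetDesc(backBuffer, &desc);
desc.Usage = D3D11_USAGE_STAGING; /* CPU-readable copy target */
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;
ID3D11Device_CreateTexture2D(device, &desc, NULL, &staging);
ID3D11DeviceContext_CopyResource(ctx, (ID3D11Resource *)staging, (ID3D11Resource *)backBuffer);
D3D11_MAPPED_SUBRESOURCE mapped;
ID3D11DeviceContext_Map(ctx, (ID3D11Resource *)staging, 0, D3D11_MAP_READ, 0, &mapped);
/* mapped.pData now holds desc.Height rows of BGRA pixels, mapped.RowPitch bytes apart -
   the layout a GDI DIB section (and thus UpdateLayeredWindow) can consume. */
ID3D11DeviceContext_Unmap(ctx, (ID3D11Resource *)staging, 0);
ID3D11Texture2D_Release(staging);
ID3D11Texture2D_Release(backBuffer);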
Or maybe there is some other way to make the overlay work without using UpdateLayeredWindow?

How to change the zoom centerpoint in an ILNumerics scene viewed with a camera

I would like to be able to zoom into an ILNumerics scene viewed by a camera (as in scene.Camera), with the center point of the zoom determined by where the mouse pointer is located when I start spinning the mouse scroll wheel. The default zoom behavior keeps the zoom center at the scene.Camera.LookAt point. So I guess this would require the mouse to be tracked in (X,Y) continuously and that point to be used as the new LookAt point? This seems similar to this post on getting the 3D coordinates from a mouse click, but in my case there's no click to indicate the location of the mouse.
Tips would be greatly appreciated!
BTW, this kind of zoom method is standard operating procedure in CAD software to zoom in and out on an assembly of parts. It's super convenient for the user.
One approach is to overload the MouseWheel event handler. The current coordinates of the mouse are available here, too.
Use the mouse screen coordinates to acquire (to "pick") the world coordinate corresponding to the primitive under the mouse.
Adjust the Camera.Position and Camera.ZoomFactor to 'move' the camera closer to the point under the mouse and to achieve the required 'directional zoom' effect.
Here is a complete example from the ILNumerics website:
using System;
using System.Windows.Forms;
using ILNumerics;
using ILNumerics.Drawing;
using ILNumerics.Drawing.Plotting;
using static ILNumerics.Globals;
using static ILNumerics.ILMath;
namespace ILNumerics.Examples.DirectionalZoom {
public partial class Form1 : Form {
public Form1() {
InitializeComponent();
}
private void panel2_Load(object sender, EventArgs e) {
Array<float> X = 0, Y = 0, Z = CreateData(X, Y);
var surface = new Surface(Z, X, Y, colormap: Colormaps.Winter);
surface.UseLighting = true;
surface.Wireframe.Visible = false;
panel2.Scene.Camera.Add(surface);
// setup mouse handlers
panel2.Scene.Camera.Projection = Projection.Orthographic;
panel2.Scene.Camera.MouseDoubleClick += Camera_MouseDoubleClick;
panel2.Scene.Camera.MouseWheel += Camera_MouseWheel;
// initial zoom all
ShowAll(panel2.Scene.Camera);
}
private void Camera_MouseWheel(object sender, Drawing.MouseEventArgs e) {
// Update: added comments.
// the next conditionals help to sort out some calls not needed. Helpful for performance.
if (!e.DirectionUp) return;
if (!(e.Target is Triangles)) return;
// make sure to start with the SceneSyncRoot - the copy of the scene which receives
// user interaction and is eventually used for rendering. See: https://ilnumerics.net/scene-management.html
var cam = panel2.SceneSyncRoot.First<Camera>();
if (Equals(cam, null)) return; // TODO: error handling. (Should not happen in regular setup, though.)
// in case the user has configured limited interaction
if (!cam.AllowZoom) return;
if (!cam.AllowPan) return; // this kind of directional zoom "comprises" a pan operation, to some extent.
// find mouse coordinates. Works only if mouse is over a Triangles shape (surfaces, but not wireframes):
using (var pick = panel2.PickPrimitiveAt(e.Target as Drawable, e.Location)) {
if (pick.NextVertex.IsEmpty) return;
// acquire the target vertex coordinates (world coordinates) of the mouse
Array<float> vert = pick.VerticesWorld[pick.NextVertex[0], r(0, 2), 0];
// and transform them into a Vector3 for easier computations
var vertVec = new Vector3(vert.GetValue(0), vert.GetValue(1), vert.GetValue(2));
// perform zoom: we move the camera closer to the target
float scale = Math.Sign(e.Delta) * (e.ShiftPressed ? 0.01f : 0.2f); // adjust for faster / slower zoom
var offs = (cam.Position - vertVec) * scale; // direction on the line cam.Position -> target vertex
cam.Position += offs; // move the camera on that line
cam.LookAt += offs; // keep the camera orientation
cam.ZoomFactor *= (1 + scale);
// TODO: consider adding: the lookat point now moved away from the center / the surface due to our zoom.
// In order for better rotations it makes sense to place the lookat point back to the surface,
// by adjusting cam.LookAt appropriately. Otherwise, one could use cam.RotationCenter.
e.Cancel = true; // don't execute common mouse wheel handlers
e.Refresh = true; // immediate redraw at the end of event handling
}
}
private void Camera_MouseDoubleClick(object sender, Drawing.MouseEventArgs e) {
var cam = panel2.Scene.Camera;
ShowAll(cam);
e.Cancel = true;
e.Refresh = true;
}
// Some sample data. Replace this with your own data!
private static RetArray<float> CreateData(OutArray<float> Xout, OutArray<float> Yout) {
using (Scope.Enter()) {
Array<float> x_ = linspace<float>(0, 20, 100);
Array<float> y_ = linspace<float>(0, 18, 80);
Array<float> Y = 1, X = meshgrid(x_, y_, Y);
Array<float> Z = abs(sin(sin(X) + cos(Y))) + .01f * abs(sin(X * Y));
if (!isnull(Xout)) {
Xout.a = X;
}
if (!isnull(Yout)) {
Yout.a = Y;
}
return -Z;
}
}
// See: https://ilnumerics.net/examples.php?exid=7b0b4173d8f0125186aaa19ee8e09d2d
public static double ShowAll(Camera cam) {
// Update: adjusts the camera Position too.
// this example works only with orthographic projection. You will need to take the view frustum
// into account, if you want to make this method work with perspective projection also. however,
// the general functioning would be similar....
if (cam.Projection != Projection.Orthographic) {
throw new NotImplementedException();
}
// get the overall extent of the camera's scene content
var limits = cam.GetLimits();
// take the maximum of width/ height
var maxExt = limits.HeightF > limits.WidthF ? limits.HeightF : limits.WidthF;
// make sure the camera looks at the unrotated bounding box
cam.Reset();
// center the camera view
cam.LookAt = limits.CenterF;
cam.Position = cam.LookAt + Vector3.UnitZ * 10;
// apply the zoom factor: the zoom factor will scale the 'left', 'top', 'bottom', 'right' limits
// of the view. In order to fit exactly, we must take the "radius"
cam.ZoomFactor = maxExt * .50;
return cam.ZoomFactor;
}
}
}
Note that the new handler performs the directional zoom only when the mouse is located over an object held by this Camera! If, instead, the mouse is placed over the background of the scene or over some other Camera / plot cube object, no effect will be visible and the common zoom is performed (zooming in/out to the look-at point).

How to change the selection shape marker in "gdk_pixbuf_composite" with GTK in C?

I use this code for crop selection:
gboolean mouse_press_callback(GtkWidget *event_box,
GdkEventButton *event,
gpointer data)
{
if (img1buffer == NULL)
return TRUE;
static gint press_x = 0, press_y = 0, rel_x = 0, rel_y = 0;
GtkAllocation ebox;
gint img1_x_offset = 0, img1_y_offset = 0;
gtk_widget_get_allocation(event_box, &ebox);
img1_x_offset = (ebox.width - width) / 2;
img1_y_offset = (ebox.height - height) / 2;
if (event->type == GDK_BUTTON_PRESS)
{
press_x = event->x - img1_x_offset;
press_y = event->y - img1_y_offset;
//g_print ("Event box clicked at coordinates %f,%f\n",
//event->x - img1_x_offset, event->y - img1_y_offset);
}
else if (event->type == GDK_BUTTON_RELEASE)
{
rel_x = event->x - img1_x_offset;
rel_y = event->y - img1_y_offset;
//g_print ("Event box released at coordinates %f,%f\n",
//event->x - img1_x_offset, event->y - img1_y_offset);
dest_x = rel_x < press_x ? rel_x : press_x;
dest_y = rel_y < press_y ? rel_y : press_y;
dest_width = abs(rel_x - press_x);
dest_height = abs(rel_y - press_y);
// mark user selection in image
GdkPixbuf *img1buffer_resized = gdk_pixbuf_scale_simple(img1buffer, width, height, GDK_INTERP_TILES);
gdk_pixbuf_composite(croppic, img1buffer_resized, dest_x, dest_y, dest_width, dest_height, 0, 0, 1, 1, GDK_INTERP_TILES, 170);
gtk_image_set_from_pixbuf(GTK_IMAGE(img1), img1buffer_resized);
}
return TRUE;
}
that in main function:
croppic = gdk_pixbuf_new_from_file("E:/Works for Gov Project/DOC/GUI/logogui1/crop_bg.png", NULL);
img1 = gtk_image_new();
event_box = gtk_event_box_new();
gtk_event_box_set_visible_window(GTK_EVENT_BOX(event_box), FALSE);
gtk_container_add(GTK_CONTAINER(event_box), img1);
gtk_container_add(GTK_CONTAINER(frame1), event_box);
g_signal_connect(G_OBJECT(event_box), "button_press_event", G_CALLBACK(mouse_press_callback), NULL);
g_signal_connect(G_OBJECT(event_box), "button-release-event", G_CALLBACK(mouse_press_callback), NULL);
and "crop_bg.png" is:
But I want a selection shape similar to the one in paint software:
What ideas would you suggest for solving this task? Or what resources on the internet could help me?
You are trying to draw on top of a GtkImage. This isn't the best way to do custom drawing, and as you've noticed, gdk_pixbuf_composite() is rather limited.
Instead, you'll want to do drawing the proper way, using the ::draw signal and cairo. cairo is the vector graphics library that GTK+ 3 uses to draw its own widgets, and the ::draw signal gives you a cairo context to draw with:
gboolean draw(GtkWidget *widget, cairo_t *cr, void *data);
cairo itself is easy to use; it's well documented and has a whole bunch of samples. What you want to do for your purposes is make a dashed stroked rectangle.
In addition, instead of using a GtkImage, you'll want to use GtkDrawingArea. GtkDrawingArea is specifically designed to be drawn on, and with a little extra work can be made to handle your events.
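For instance, a minimal setup sketch (frame1 is the container from the question; the handler names match the snippets below, with button_release_event assumed to be written analogously):
GtkWidget *area = gtk_drawing_area_new();
/* a GtkDrawingArea receives no button/motion events by default - opt in */
gtk_widget_add_events(area, GDK_BUTTON_PRESS_MASK
                          | GDK_BUTTON_RELEASE_MASK
                          | GDK_POINTER_MOTION_MASK);
g_signal_connect(area, "draw", G_CALLBACK(draw), NULL);
g_signal_connect(area, "button-press-event", G_CALLBACK(button_press_event), NULL);
g_signal_connect(area, "button-release-event", G_CALLBACK(button_release_event), NULL);
g_signal_connect(area, "motion-notify-event", G_CALLBACK(motion_notify_event), NULL);
gtk_container_add(GTK_CONTAINER(frame1), area);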
The last piece of the puzzle, then, is how do you start drawing when the mouse events come in? You don't get a cairo_t in button-press-event or button-release-event, so you can't draw there. Indeed, GTK+ is optimized so that it only draws when necessary. To indicate that it's necessary to draw, you can use the gtk_widget_queue_draw() method. There are variations on this method that mark only a subset of the widget to be redrawn (you can get this subset back with cairo_clip_extents() from within the ::draw handler).
Remember that widget coordinates are floating-point; so are cairo coordinates.
Let's demonstrate. I'm going to use global variables for this; you probably don't want to.
gdouble x0, y0;
gdouble x1, y1;
These will store the endpoints of the rectangle. When we press the mouse button, we want to start the drawing:
gboolean button_press_event(GtkWidget *widget, GdkEventButton *e, gpointer data)
{
// ...
x0 = e->x;
y0 = e->y;
x1 = e->x; // start with a zero-sized rectangle
y1 = e->y;
// ...
}
When we move the mouse, we want to change x1 and y1 to the new mouse coordinates, and then update the rectangle. To optimize things, we'll only update the area that changed:
gboolean motion_notify_event(GtkWidget *widget, GdkEventMotion *e, gpointer data)
{
// ...
// first queue the union of the old and new rectangles for drawing;
// note that gtk_widget_queue_draw_area() takes x, y, width and height
gdouble left = MIN(x0, MIN(x1, e->x));
gdouble top = MIN(y0, MIN(y1, e->y));
gdouble right = MAX(x0, MAX(x1, e->x));
gdouble bottom = MAX(y0, MAX(y1, e->y));
gtk_widget_queue_draw_area(widget,
(gint)left, (gint)top,
(gint)(right - left) + 2, (gint)(bottom - top) + 2); // pad for fractional coords
// then set the new rectangle
x1 = e->x;
y1 = e->y;
// ...
}
You can probably figure out what to do on button-release-event from the above. draw would look something like
gboolean draw(GtkWidget *widget, cairo_t *cr, gpointer data)
{
// ...
cairo_rectangle(cr, x0, y0, x1 - x0, y1 - y0);
// set up a dashed solid-color stroke
static const double dashes[] = { 4.0, 4.0 };
cairo_set_dash(cr, dashes, 2, 0.0);
cairo_set_source_rgb(cr, 0.0, 0.0, 0.0);
cairo_set_line_width(cr, 1.0);
cairo_stroke(cr);
// ...
}
Good luck!

Animated GIF in SWT table/tree viewer cell

http://www.java2s.com/Code/Java/SWT-JFace-Eclipse/DisplayananimatedGIF.htm describes how to display an animated GIF in SWT in general. While that code works and is easily comprehensible, I'm facing serious issues displaying an animated GIF in an SWT/JFace table/tree viewer cell with that technique (all code below).
Essentially, I implemented my own OwnerDrawLabelProvider which creates an ImageLoader in paint(Event, Object) and starts an animation thread. The problem seems to be that this animation thread is not the UI thread and I don't know which GC or Display instance to use in its run() method.
I tried creating a separate GC instance in the thread's constructor - derived from event.gc - but the thread fails writing to that GC as soon as I step out of the debugger...
Sat Jan 9 22:11:57 192.168.1.6.local.home java[25387] : CGContextConcatCTM: invalid context 0x0
2010-01-09 22:12:18.356 java[25387:17b03] It does not make sense to draw an image when [NSGraphicsContext currentContext] is nil. This is a programming error. Break on _NSWarnForDrawingImageWithNoCurrentContext to debug. This will be logged only once. This may break in the future.
Sat Jan 9 22:12:41 192.168.1.6.local.home java[25387] : CGContextConcatCTM: invalid context 0x0
How do I need to handle this situation?
Below are the relevant code sections:
/* Called by paint(Event, Object). */
private void paintAnimated(final Event event, final ImageLoader imageLoader) {
if (imageLoader == null || ArrayUtils.isEmpty(imageLoader.data)) {
return;
}
final Thread animateThread = new AnimationThread(event, imageLoader);
animateThread.setDaemon(true);
animateThread.start();
}
private class AnimationThread extends Thread {
private Display display;
private GC gc;
private ImageLoader imageLoader;
private Color background;
public AnimationThread(final Event event, final ImageLoader imageLoader) {
super("Animation");
this.display = event.display;
/*
* If we were to simply reference event.gc it would be reset/empty by the time it's being used
* in run().
*/
this.gc = new GC(event.gc.getDevice());
this.imageLoader = imageLoader;
this.background = getBackground(event.item, event.index);
}
@Override
public void run() {
/*
* Create an off-screen image to draw on, and fill it with the shell background.
*/
final Image offScreenImage =
new Image(this.display, this.imageLoader.logicalScreenWidth,
this.imageLoader.logicalScreenHeight);
final GC offScreenImageGC = new GC(offScreenImage);
offScreenImageGC.setBackground(this.background);
offScreenImageGC.fillRectangle(0, 0, this.imageLoader.logicalScreenWidth,
this.imageLoader.logicalScreenHeight);
Image image = null;
try {
/* Create the first image and draw it on the off-screen image. */
int imageDataIndex = 0;
ImageData imageData = this.imageLoader.data[imageDataIndex];
image = new Image(this.display, imageData);
offScreenImageGC.drawImage(image, 0, 0, imageData.width, imageData.height, imageData.x,
imageData.y, imageData.width, imageData.height);
/*
* Now loop through the images, creating and drawing each one on the off-screen image before
* drawing it on the shell.
*/
int repeatCount = this.imageLoader.repeatCount;
while (this.imageLoader.repeatCount == 0 || repeatCount > 0) {
switch (imageData.disposalMethod) {
case SWT.DM_FILL_BACKGROUND:
/* Fill with the background color before drawing. */
offScreenImageGC.setBackground(this.background);
offScreenImageGC.fillRectangle(imageData.x, imageData.y, imageData.width,
imageData.height);
break;
case SWT.DM_FILL_PREVIOUS:
// Restore the previous image before drawing.
offScreenImageGC.drawImage(image, 0, 0, imageData.width, imageData.height,
imageData.x, imageData.y, imageData.width, imageData.height);
break;
}
imageDataIndex = (imageDataIndex + 1) % this.imageLoader.data.length;
imageData = this.imageLoader.data[imageDataIndex];
image.dispose();
image = new Image(this.display, imageData);
offScreenImageGC.drawImage(image, 0, 0, imageData.width, imageData.height, imageData.x,
imageData.y, imageData.width, imageData.height);
// Draw the off-screen image.
this.gc.drawImage(offScreenImage, 0, 0);
/*
* Sleeps for the specified delay time (adding commonly-used slow-down fudge factors).
*/
try {
int ms = imageData.delayTime * 10;
if (ms < 20) ms += 30;
if (ms < 30) ms += 10;
Thread.sleep(ms);
} catch (final InterruptedException e) {
return;
}
/* After drawing the last frame, decrement the remaining repeat count. */
if (imageDataIndex == this.imageLoader.data.length - 1) {
repeatCount--;
}
}
} finally {
if (image != null) {
image.dispose();
}
offScreenImageGC.dispose();
offScreenImage.dispose();
}
}
}
I posted the same problem to the SWT newsgroup http://www.eclipse.org/forums/index.php?t=tree&th=160398
After many hours of frustrating trial-and-error a co-worker came up with a feasible solution. My initial approaches to have this implemented in a totally self-contained LabelProvider failed miserably.
One approach that didn't work was to override LabelProvider#update() and to call timerExec(100, new Runnable() {...viewer.update()... from within that method. The life-cycle of that is hard to control, and it uses too many CPU cycles (10% on my MacBook).
One of my colleague's ideas was to implement a custom TableEditor: a label with an image (one frame of the animated GIF) but no text. Each TableEditor instance would start its own thread in which it updates the label's image. This works quite well, but there's a separate "animation" thread for each animated icon. It was also a performance killer, consuming 25% CPU on my MacBook.
The final approach has three building blocks:
an OwnerDrawLabelProvider which paints either a static image or a frame of an animated GIF,
an animation thread (the pacemaker) which calls redraw() for the column that contains the animated GIFs and also calls update(),
and the viewer's content provider, which controls the animation thread.
Details in my blog http://www.frightanic.com/2010/02/09/animated-gif-in-swt-tabletree-viewer-cell/.
Can't you let a LabelProvider return different images and then call viewer.update(...) on the elements you want to animate? You can use Display.timerExec to get a callback instead of having a separate thread.
See my answer here for how you can change colors. You should be able to do something similar with images.

Touches on transparent PNGs

I have a PNG in a UIImageView with alpha around the edges (let's say a circle). When I tap it, I want it to register as a tap for the circle if I'm touching the opaque bit, but a tap for the view behind if I touch the transparent bit.
(BTW: On another forum, someone said PNGs automatically do this, and a transparent PNG should pass the click on to the view below, but I've tested it and it doesn't, at least not in my case.)
Is there a flag I just haven't flipped, or do I need to create some kind of formula: "if tapped { get location; calculate distance from centre; if < r { touched circle } else { pass it on } }"?
-k.
I don't believe that PNGs automatically do this, but can't find any references that definitively say one way or the other.
Your radius calculation is probably simpler, but you could also manually check the alpha value of the touched pixel in your image to determine whether to count it as a hit. This code is targeted at OS X 10.5+, but with some minor modifications it should run on iPhone: Getting the pixel data from a CGImage object. Here is some related discussion on retrieving data from a UIImage: Getting data from an UIImage.
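A rough sketch of that alpha check (assuming a 32-bit bitmap with the alpha component in the last byte of each pixel; the real layout depends on CGImageGetAlphaInfo):
/* Returns the alpha byte of the pixel at (x, y), or 0 on failure. */
UInt8 AlphaAtPoint(CGImageRef image, size_t x, size_t y)
{
    UInt8 alpha = 0;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
    if (data != NULL) {
        const UInt8 *bytes = CFDataGetBytePtr(data);
        size_t bytesPerRow = CGImageGetBytesPerRow(image);
        size_t bytesPerPixel = CGImageGetBitsPerPixel(image) / 8;
        alpha = bytes[y * bytesPerRow + x * bytesPerPixel + (bytesPerPixel - 1)];
        CFRelease(data);
    }
    return alpha;
}
A touch handler would convert the touch location into image coordinates, call this, and pass the event through when the returned value is below some threshold.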
I figured it out... the PNG bounding-box transparency issue, and being able to click through to another image behind:
var hitTestPoint1:Boolean = false;
var myHitTest1:Boolean = false;
var objects:Array;
clip.addEventListener(MouseEvent.MOUSE_DOWN, doHitTest);
clip.addEventListener(MouseEvent.MOUSE_UP, stopDragging);
clip.buttonMode = true;
clip.mouseEnabled = true;
clip.mouseChildren = true;
clip2.addEventListener(MouseEvent.MOUSE_DOWN, doHitTest);
clip2.addEventListener(MouseEvent.MOUSE_UP, stopDragging);
clip2.buttonMode = true;
clip2.mouseEnabled = true;
clip2.mouseChildren = true;
clip.rotation = 60;
function doHitTest(event:MouseEvent):void
{
objects = stage.getObjectsUnderPoint(new Point(event.stageX, event.stageY));
trace("Which one: " + event.target.name);
trace("What's under point: " + objects);
/* Drag the topmost object whose opaque pixels were really hit; this loop
assumes the objects under the point include the draggable clips. */
for(var i:int = objects.length - 1; i >= 0; i--)
{
var hit:DisplayObject = objects[i] as DisplayObject;
if(hit != null && realHitTest(hit, new Point(event.stageX, event.stageY)))
{
var clipToDrag:Sprite = hit as Sprite;
if(clipToDrag != null)
{
clipToDrag.startDrag();
}
break;
}
}
}
function stopDragging(event:MouseEvent):void
{
event.target.stopDrag();
}
function realHitTest(object:DisplayObject, point:Point):Boolean
{
/* If we're dealing with a Bitmap object then we just use the hitTest
* method of its BitmapData.
*/
if(object is Bitmap)
{
return (object as Bitmap).bitmapData.hitTest(new Point(0,0), 0, object.globalToLocal(point));
}
else {
/* First we check if the hitTestPoint method returns false. If it does, that
* means that we definitely do not have a hit, so we return false. But if this
* returns true, we still don't know 100% that we have a hit because it might
* be a transparent part of the image.
*/
if(!object.hitTestPoint(point.x, point.y, true))
{
return false;
}
else {
/* So now we make a new BitmapData object and draw the pixels of our object
* in there. Then we use the hitTest method of that BitmapData object to
* really find out of we have a hit or not.
*/
var bmapData:BitmapData = new BitmapData(object.width, object.height, true, 0x00000000);
bmapData.draw(object, new Matrix());
var returnVal:Boolean = bmapData.hitTest(new Point(0,0), 0, object.globalToLocal(point));
bmapData.dispose();
return returnVal;
}
}
}