How to get x and y coordinates of user tap/touch relative to window - Android Titanium mobile?

Can anyone tell me how to get the x and y coordinates of a user tap/touch relative to the window in Android Titanium mobile?
I need to add a popup view close to a tableviewrow when the user selects a row. How do I determine the x,y (top/left) position?

This might help you, as the window or object touch events can help you find the exact point the user has touched on screen.
thanks

You can use:
row.addEventListener('click', function(e) {
    alert('left: ' + e.x + ' top: ' + e.y);
});
where row is an object of Titanium.UI.TableViewRow. For other objects (like a window or view), you can use the same event listener.
But remember one thing: it gives the co-ordinates with respect to the corresponding row, i.e. you may get the same co-ordinates for all the rows.
Therefore, if you want to popup something at that co-ordinate, you should addEventListener to the main window.
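For example, here is a minimal sketch following that suggestion (untested; the view properties and sizes are illustrative), listening on the window itself and using the event coordinates as the popup's top/left:
// Listen on the window so e.x / e.y can be used as the popup's top/left.
var win = Ti.UI.createWindow({ backgroundColor: 'white' });

var tableView = Ti.UI.createTableView({
    data: [
        Ti.UI.createTableViewRow({ title: 'Row 1' }),
        Ti.UI.createTableViewRow({ title: 'Row 2' })
    ]
});
win.add(tableView);

win.addEventListener('click', function(e) {
    // e.x / e.y: tap position reported for the window listener,
    // used here as the top/left of a small popup view.
    var popup = Ti.UI.createView({
        top: e.y,
        left: e.x,
        width: 150,
        height: 80,
        backgroundColor: '#ccc'
    });
    win.add(popup);
});

win.open();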

On the click event of any view or object you get the x and y coordinates; see this link for more reference:
Titanium.UI.Window

Enable context menu for specific cell or item in uitable or uilistbox in MATLAB

I created a uitable (new version using appdesigner) in MATLAB and wanted to support right clicking on cells and showing a cell specific context menu. Much to my surprise there seemed to be no way to support this.
The context menu only seems to trigger with a right click on the uitable, but there is no way of knowing which cell was selected (I think, maybe not?). I created a workaround where I left clicked to select a cell, and during that selection I right clicked using a Java mouse robot to trigger the context menu. This is super ugly but sort of works, except if you need to bring up the menu twice on the same cell. Apparently the cell selected callback only fires once for the cell, until a new cell is selected.
I tried literally putting two tables in the same spot and, upon selecting one, toggling to the other, but the memory of cell selection is table specific, so this only worked for two clicks before both tables had been clicked on the same cell, and toggling visibility back to the first resulted in the cell selection callback not firing (since the cell had not changed). I tried various approaches to try and deselect the cell (disable/enable, visibility change, data change, etc.), but the cell selection callback never changed.
I even tried having duplicate columns, where the goal was to hide a column, where normally columns 1 and 2 would be visible (column 3 out of view due to size), and then on clicking on column 2, column 2 would hide itself (0 width) and column 3 (an exact duplicate) would move into its place, thus seeming to the user like multi-clicking was supported. Unfortunately I can't set the column width to 0 -- or rather, setting it to 0 doesn't completely hide the column. Instead there seems to be some minimal width to the column and the whole thing looked awful.
I wanted to do something similar with a listbox (right click support), but again I couldn't figure out how to identify where I was right clicking. I eventually settled on left clicking on a listbox and using the mouse robot approach to right click to bring up the context menu. Unlike the uitable, it was fairly easy to clear the selection on the listbox (set listbox.Value = {}). However, I strongly dislike the left click instead of right click approach and I'd rather have multiple columns.
Any suggestions would be much appreciated!!!
So I found an approach that is better than using a robot. I had tried this but was missing a critical portion which I will describe below.
Upon selecting a row in the table, the open command can be used to launch a context menu. My problem was that I didn't know where to launch the menu. I tried CurrentPoint for the figure, but it was (0,0) (or in general not valid).
Here's the current documentation for CurrentPoint:
Current point, returned as a two-element vector. The vector contains the (x, y) coordinates of the mouse pointer, measured from the lower-left corner of the figure. The values are in units specified by the Units property.
The coordinates update when you do any of the following:
- Press the mouse button within the figure.
- Release the mouse button after pressing it within the figure.
- Press the mouse button within the figure, and then release it outside the figure.
- Rotate the scroll wheel within the figure.
- Move the mouse within the figure (without pressing any buttons), provided that the WindowButtonMotionFcn property is not empty.
If the figure has a callback that responds to mouse interactions, and you trigger that callback faster than the system can execute the code, the coordinates might not reflect the actual location of the pointer. Instead, they are the location when the callback began execution.
If you use the CurrentPoint property to plot points, the coordinate values might contain rounding error.
Here's the critical line again:
"Move the mouse within the figure (without pressing any buttons), provided that the WindowButtonMotionFcn property is not empty."
So when a selection of a cell happens, the CurrentPoint is not valid. However, if we simply define a WindowButtonMotionFcn, then it is!
So the general idea is to have a callback for the table when a cell is selected (SelectionChangedFcn) and to set a dummy callback for WindowButtonMotionFcn.
The final point is that a context menu can be launched with the open function if you specify a given location to launch it at. This is different from attaching it to an object and having it automatically launch on right click.
Here's some example code. If you comment out the callback for window motion then the whole thing doesn't work! Unfortunately it is a left click for targeting the cell, but at least it avoids the nonsense I was using with a Java robot right click.
classdef wtf < handle
    properties
        h  %struct, this was an appdesigner handle
        cm %context menu
    end
    methods
        function obj = wtf()
            h = struct;
            h.UIFigure = uifigure();
            h.UITable = uitable(h.UIFigure);
            obj.h = h;
            obj.h.UITable.CellSelectionCallback = @obj.tableCall;
            %obj.h.UITable.SelectionChangedFcn = @obj.tableCall;
            %Some data ...
            s = struct;
            s.a = (1:4)';
            s.b = (5:8)';
            obj.h.UITable.Data = struct2table(s);
            %Our context menu
            cm = uicontextmenu(obj.h.UIFigure);
            m = uimenu(cm,'Text','Menu1');
            obj.cm = cm;
            %WTF ... without this we don't get a valid CurrentPoint
            obj.h.UIFigure.WindowButtonMotionFcn = @obj.mouseMove;
        end
        function tableCall(obj,x,y)
            %x - impacted object (the table)
            %y - event info
            cp = get(obj.h.UIFigure,'CurrentPoint');
            open(obj.cm,cp(1),cp(2));
            selected_cell = y.Indices;
            %selected_cell = y.Selection;
            x.Selection = []; %allows reselecting same cell without
                              %needing to select another cell first
            %Now we can run something on the context menu
            %that targets the selected cell
        end
        function mouseMove(obj,x,y)
            %we could store a point here
        end
    end
end

How to ask Appium to drag an element?

I am taking the first steps in mobile automation.
I have a view such as below:
As you can see, I have an android.view.ViewGroup over which I should drag an element.
My question is:
How can I drag the button and take it to the right?
You can use the TouchAction class: just provide the x, y coordinates up to which you want to drag.
new TouchAction(driver)
    .press(PointOption.point(256, 1115))
    .waitAction(WaitOptions.waitOptions(Duration.ofMillis(2000)))
    .moveTo(PointOption.point(256, 600))
    .release()
    .perform();
Or, if you want to do it more precisely, you can get the device's width and height and work out the drag coordinates from those.
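For instance, here is a rough sketch (Appium java-client 7.x style; the driver instance and the 20%/80% proportions are just assumptions for illustration) that derives the drag points from the screen size instead of hard-coding pixel values:
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.TouchAction;
import io.appium.java_client.touch.WaitOptions;
import io.appium.java_client.touch.offset.PointOption;
import org.openqa.selenium.Dimension;
import java.time.Duration;

// Drags from roughly 20% to 80% of the screen width at mid-height,
// so the gesture adapts to whatever device it runs on.
public static void dragRight(AppiumDriver driver) {
    Dimension size = driver.manage().window().getSize();
    int startX = (int) (size.getWidth() * 0.2);
    int endX   = (int) (size.getWidth() * 0.8);
    int y      = size.getHeight() / 2;

    new TouchAction(driver)
            .press(PointOption.point(startX, y))
            .waitAction(WaitOptions.waitOptions(Duration.ofMillis(2000)))
            .moveTo(PointOption.point(endX, y))
            .release()
            .perform();
}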

Interactive bar chart: user changes y-axis value at runtime by dragging it

For example, there are 3 columns in a bar chart with y-axis values of 100, 200, 300.
The user selects one column alone and drags it to the value 500 on the y-axis.
How to achieve this?
Is it available in a library like androidplot or aChartPlugin?
If not, which plugin supports this requirement? Our project is for Android tablets.
Please provide me sample code for this requirement. Thanks in advance.
To my knowledge no library exists for Android that provides 'out of the box' capabilities to drag and reorganize data. Having said that, any library that provides you with a way to correlate touch events to data elements and also supports dynamic updates should be suitable.
If you have specific requirements about how the "drag" is implemented then you may end up having to roll your own library or customize an existing one to your needs. If not, here's a basic workflow you could implement with Androidplot that represents the drag operation as a cursor (a rough code sketch follows the steps below):
1 - Detect selections using an OnTouchListener. Here's an example of a bar plot that allows bar selection via touch. It gives a full example of converting screen coords to model elements etc. Create and add an instance of XValueMarker denoting the current position of the selection. (An XValueMarker is basically a customizable vertical line marking an x-val.)
2 - Detect "drag" events using an OnTouchListener. Here's an example that detects zooming and scrolling. It's not the same thing exactly but the scrolling logic is close enough to give you the general idea.
3 - As the user drags, update the position of the XValueMarker by XValueMarker.setValue(Number).
4 - Once the "drag" ends, remove the XValueMarker and modify the underlying XYSeries to reflect the change using basic data structure manipulation(s).
And of course remember to always call plot.redraw() after each operation that is expected to alter the appearance of the plot in some way.
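Here is a rough sketch of the steps above. The marker calls follow the Androidplot API mentioned in the steps, but exact signatures may vary by version, and the screen-to-domain conversion is left as a hypothetical helper (the linked bar-selection example shows how to do that part properly).
import android.view.MotionEvent;
import android.view.View;
import com.androidplot.xy.XYPlot;
import com.androidplot.xy.XValueMarker;

public class DragCursorHandler implements View.OnTouchListener {
    private final XYPlot plot;
    private XValueMarker marker;

    public DragCursorHandler(XYPlot plot) {
        this.plot = plot;
    }

    @Override
    public boolean onTouch(View view, MotionEvent event) {
        Number domainVal = screenXToDomain(event.getX());
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:   // step 1: selection creates the cursor
                marker = new XValueMarker(domainVal, "drag");
                plot.addMarker(marker);
                break;
            case MotionEvent.ACTION_MOVE:   // steps 2-3: move the cursor with the finger
                if (marker != null) {
                    marker.setValue(domainVal);
                }
                break;
            case MotionEvent.ACTION_UP:     // step 4: drop -> remove cursor, update data
                if (marker != null) {
                    plot.removeMarker(marker);
                    marker = null;
                }
                // ...modify the underlying XYSeries to reflect the new value here...
                break;
        }
        plot.redraw();                      // redraw after every change to the plot
        return true;
    }

    // Placeholder: convert a screen x coordinate into a domain (x-axis) value.
    // How to do this depends on your plot's boundaries; see the linked example.
    private Number screenXToDomain(float screenX) {
        return screenX; // replace with a real screen-to-domain conversion
    }
}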

tkinter listbox drag and drop

I'm trying to make a listbox with the ability to do drag and drop onto a canvas. I've done drag and drop before, but it was only between canvas.create_text items, loosely based on the code for this checkers program I found here. I've seen a couple of questions about drag and drop listboxes, but they only deal with changing the order of elements in the listbox. What I'm dealing with is a listbox which has a list of names, and a canvas with some create_text objects on the canvas, and I want to be able to drag a name from the listbox onto the canvas. I figure I'd need to make a Listbox subclass, but I'm unsure of where to go from there.
So I've got a DialogWindow (subclass of Toplevel), and have my canvas and listbox in the DialogWindow. I've conjured up a way of getting a name from the listbox: when I click on a name, I convert it to a canvas.create_text object and then drag that. My issue is the drop. I try to use the canvas.canvasx to convert to canvas coordinates but it hasn't worked for me. x and y are still in listbox coordinates.
def onRelease(self, event):
    x = self.canvas.canvasx(event.x)
    y = self.canvas.canvasx(event.y)
    print(event.x, event.y)
    print(x, y)  # Prints the same thing as the previous line
The key to drag and drop boils down to needing to do three things:
bind on <ButtonPress-1> to select the item to be dragged
bind on <B1-Motion> to do the drag
bind on <ButtonRelease-1> to do the drop
None of this requires any subclassing. All of these bindings are on the listbox widget. You'll probably want to create an instance of Toplevel with a text label in it and the window decorations removed (using wm_overrideredirect(True)) to represent the item being dragged.
On the drop, you'll need to convert the coordinates of the mouse to canvas coordinates using the canvasx and canvasy methods of the canvas. Instead of starting with event.x and event.y (which are relative to the listbox), use the winfo_pointerxy method to get the screen coordinates of the mouse, then do a little math.
Here's an example of how you could do the drop:
def _on_drag_drop(self, event):
    # text of the currently selected listbox item
    i = self.listbox.curselection()[0]
    text = self.listbox.get(i)
    # screen position of the canvas's top-left corner
    wx, wy = self.canvas.winfo_rootx(), self.canvas.winfo_rooty()
    # screen position of the mouse pointer
    x, y = self.winfo_pointerxy()
    # convert to canvas coordinates (also accounts for canvas scrolling)
    cx = self.canvas.canvasx(x - wx)
    cy = self.canvas.canvasy(y - wy)
    self.canvas.create_text(cx, cy, text=text, anchor="sw")
    self.canvas.configure(scrollregion=self.canvas.bbox("all"))
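Putting the three bindings together, here is a minimal self-contained sketch (the widget layout, class name, and sample names are just for illustration) that drags a listbox entry onto a canvas, using a borderless Toplevel as the drag image:
import tkinter as tk

class DragDemo(tk.Tk):
    def __init__(self):
        super().__init__()
        self.listbox = tk.Listbox(self)
        self.canvas = tk.Canvas(self, background="white")
        self.listbox.pack(side="left", fill="y")
        self.canvas.pack(side="right", fill="both", expand=True)
        for name in ("alice", "bob", "carol"):
            self.listbox.insert("end", name)

        self._drag_window = None
        self._drag_text = None

        # the three bindings described above, all on the listbox
        self.listbox.bind("<ButtonPress-1>", self._on_drag_start)
        self.listbox.bind("<B1-Motion>", self._on_drag_motion)
        self.listbox.bind("<ButtonRelease-1>", self._on_drag_drop)

    def _on_drag_start(self, event):
        # remember which name was pressed and create a borderless drag window
        index = self.listbox.nearest(event.y)
        self._drag_text = self.listbox.get(index)
        self._drag_window = tk.Toplevel(self)
        self._drag_window.wm_overrideredirect(True)
        tk.Label(self._drag_window, text=self._drag_text, relief="solid").pack()

    def _on_drag_motion(self, event):
        # keep the drag window just below and to the right of the pointer
        if self._drag_window is not None:
            x, y = self.winfo_pointerxy()
            self._drag_window.geometry("+%d+%d" % (x + 10, y + 10))

    def _on_drag_drop(self, event):
        if self._drag_window is None:
            return
        self._drag_window.destroy()
        self._drag_window = None
        # convert the pointer position to canvas coordinates, as shown above
        # (for simplicity this doesn't check that the drop is over the canvas)
        wx, wy = self.canvas.winfo_rootx(), self.canvas.winfo_rooty()
        px, py = self.winfo_pointerxy()
        cx = self.canvas.canvasx(px - wx)
        cy = self.canvas.canvasy(py - wy)
        self.canvas.create_text(cx, cy, text=self._drag_text, anchor="sw")

if __name__ == "__main__":
    DragDemo().mainloop()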

Trying to position a button programmatically, however it's not in the right position according to X,Y

I'm trying to position a button.
Here is the code for the positioning:
[btnAbs setFrame:CGRectMake(57, 50, 106, 99)];
The coordinates I got are from here:
As you can see, the xib states the x & y to be at 57 and 192, which is where I want the button to be.
However when I run it in the simulator, here is where it's placed:
Obviously I could keep guessing and guessing the x and y coordinates, but this is very time consuming. So how come it's doing this?
Please join the links together when looking at the pics, as I need more than 10 rep to post images (or could a mod fix this, please?).
The problem is here:
The “origin” in Interface Builder doesn’t actually affect how the view gets positioned programmatically—it’s just a visual aid. If you click the dot in the top left of that box, the X and Y coordinates will change to the top-left of the view, which are the coordinates you want to pass to -setFrame:.
It looks to me as if you have the GUI designer aligning based upon the center of your image view. When you do it in code, it is going to align based upon the top left of the image view.
Further, your code places it at a y of 50, where your GUI designer is showing a y coord. of 192.
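If, as this answer suggests, the inspector is showing the view's center rather than its origin, one way to reproduce the Interface Builder placement in code is to convert that center into a top-left origin before calling setFrame:. A short sketch (untested; the numbers are the ones from the question):
// Convert the point shown in Interface Builder (assuming it is the view's
// center) into a top-left origin before building the frame.
CGFloat width  = 106.0;
CGFloat height = 99.0;
CGPoint ibPoint = CGPointMake(57.0, 192.0);   // what the xib inspector shows

[btnAbs setFrame:CGRectMake(ibPoint.x - width / 2.0,
                            ibPoint.y - height / 2.0,
                            width,
                            height)];

// Or keep the existing size and just set the center directly:
btnAbs.center = ibPoint;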