Can't manipulate lines by dragging in JavaFX scene builder - javafx-8

I am using Windows and I can't change the points of lines by dragging them in the editor with the mouse. The problem seems to only be happening on the Windows version of Scene Builder 2.0. On a MacBook this is not a problem.
What gives? Thanks.

Manipulating lines by dragging and dropping the endpoint doesn't work for me on either Windows or OS X.
Editing shapes by dragging points is largely unsupported in Scene Builder 2.0.
There is an existing feature request in the JavaFX issue tracker:
Content panel should provide editing gestures for CubicCurve, QuadCurve, Polygon...
I commented on the request to note that even basic lines cannot be manipulated.
Quote from the developer (Eric Le Ponner) for the feature request:
Yes, there are no gestures for editing Shape currently. The only way to edit
shapes is by changing properties in the Inspector panel, which I agree
is far from being user friendly.
On the other hand, vector editing is a vast subject (some software are
specialized into that :). And Scene Builder will never become a vector
drawing tool. So the key point is to define what would be interesting
in the context of a tool like Scene Builder. Any specific request is
welcome.
Requests can be filed against the design tool project at:
https://javafx-jira.kenai.com
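Until such gestures exist, the workaround the quote describes can also be done in code or FXML rather than the Inspector. A minimal sketch using the standard javafx.scene.shape.Line properties (the same startX/startY/endX/endY the Inspector panel exposes):

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.layout.Pane;
    import javafx.scene.shape.Line;
    import javafx.stage.Stage;

    public class LineDemo extends Application {
        @Override
        public void start(Stage stage) {
            // Set the endpoints explicitly; dragging them in Scene Builder 2.0
            // is not supported, but these properties are freely editable.
            Line line = new Line();
            line.setStartX(20);
            line.setStartY(20);
            line.setEndX(180);
            line.setEndY(120);

            stage.setScene(new Scene(new Pane(line), 200, 150));
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }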

Related

How can I set size and location of the app window within a ui-test?

A bit of background: I recently implemented drag and drop behavior in my app, where I can drag items from e.g. the Finder into my NSTableView. Now I wanted to write a few UI tests for this new functionality.
The general idea was to move the Finder window to the left side of the screen and my application window to the right side of the screen, and then execute the drag and drop. The drag and drop itself is not the problem; the problem is the setup of the mentioned window layout. I cannot find a convenient way to resize and move the two windows. Coming from .NET, I expected something like app.window.setSize(..) or app.window.moveTo(...).
What I tried so far:
As I have Magnet installed on my Mac, I tried the easy way out and sent key events (Control + Option + Arrow) to the window. This did not work; sending the keystrokes resulted in an error beep. Doing this manually during the tests works, so I don't know what exactly stops Magnet from rearranging the windows, but I guess it has something to do with the testing framework. I did not dig deeper into this, as it would have been a cheap solution anyway.
Drag the app window corners based on screen dimensions, e.g. for the window on the left I drag the corners to the top left, bottom left, top middle, and bottom middle of the screen. This requires that all four corners are visible on screen, but that's a problem for another day. This approach would normally work, but the y-coordinates I get from the frame of my app window are not what I was expecting. I retrieve the location of the app window with app.windows.firstMatch.frame.origin. The x-coordinates look alright, but the y-coordinates are totally off (from what I expected).
I can't find many resources regarding the origin or frame members. Any idea how to approach this problem, or where to find documentation about the XCUITest framework and the basic concepts behind it? The official documentation doesn't help in this case. I only found this short explanation in the Apple documentation archive about the coordinate system of macOS (or OS X back then) applications.
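One likely explanation (an assumption on my part, though it is what that archive document describes): AppKit reports window frames in a coordinate space whose origin is the bottom-left corner of the main screen, with y increasing upward, whereas most other UI frameworks put the origin at the top left. If that is the cause, converting a bottom-left-origin y value to the top-left-origin value you expected is a one-line flip:

    topLeftY = screenHeight - bottomLeftY - windowHeight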

UI Hololens - HandDraggable Issues

I've recently created a 2D app for the HoloLens. It is a UI panel with several buttons on it. In order to let the user drag the panel and position it wherever they want, I implemented the HandDraggable.cs functionality (from the HoloToolkit). However, whenever I try to move the panel, it also rotates.
To change that, I modified the Rotation Mode from "Default" to "Orient Towards User" and "Orient Towards User and Keep Upright". But then it works even worse: in either case, whenever I select the panel and drag it somewhere, the panel runs out of my field of view and suddenly disappears.
I wanted to ask if somebody has already tried to implement the HandDraggable option in a HoloLens UI app and knows how to fix this rotation issue.
I'm currently working on a HoloLens UI for one of my projects, and to manipulate the UI I used the TwoHandManipulatable script, which is built into the MixedRealityToolkit. In the Manipulation Mode of that script you can set "Move" as the only option, which allows you to move a menu with two hands as well as with one. (I wanted a menu that you can also rotate and scale, which works perfectly with this script; you can lock the axes around which rotation is enabled to avoid unwanted manipulation.)
For your HandDraggable script, did you try setting RotationMode to Lock Object Rotation? It sounds like that could solve the problem.

How to implement Drag and Drop using GWT with touch gesture

I need to implement drag and drop using GWT with touch gestures.
Use case: when we drag a panel with touch, only its shadow image (or some grey box) should start to drag from there, with the original panel staying at its original location. We should be able to drop that shadow (or pseudo box) on the desired droppable pane, and then a popup should come up.
My advice is to not use any library for this.
Since browsers change, these libraries sometimes break. When you use such a library, everything that happens inside is black magic, and you will have a hard time fixing it.
That is why I always implement DND myself. Basic DND is really easy to do and takes only a handful of lines of code. A library is overkill in a lot of situations.
Anyway, over at G-Widgets there is a nice guide on how to do DND yourself: http://www.g-widgets.com/2015/12/24/drag-and-drop-using-gwt/
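To sketch what "a handful of lines" can look like for the proxy-drag use case from the question (all class and widget names here are hypothetical, and the lambdas assume GWT 2.8+), the whole thing boils down to three handlers: create a grey proxy box on drag start, move only the proxy while dragging, and on release hit-test the drop target and show a popup:

    import com.google.gwt.dom.client.Touch;
    import com.google.gwt.event.dom.client.*;
    import com.google.gwt.user.client.ui.AbsolutePanel;
    import com.google.gwt.user.client.ui.FocusPanel;
    import com.google.gwt.user.client.ui.Label;
    import com.google.gwt.user.client.ui.PopupPanel;
    import com.google.gwt.user.client.ui.Widget;

    /** Proxy-drag sketch: the grey box moves, the original panel stays put. */
    public class ProxyDragController {

        private final AbsolutePanel board;   // container the proxy moves around in
        private final Widget dropTarget;     // the droppable pane
        private Label proxy;                 // grey shadow box, exists only mid-drag

        public ProxyDragController(AbsolutePanel board, Widget dropTarget, FocusPanel handle) {
            this.board = board;
            this.dropTarget = dropTarget;

            // Mouse gestures.
            handle.addMouseDownHandler(e -> start(e.getClientX(), e.getClientY()));
            handle.addMouseMoveHandler(e -> move(e.getClientX(), e.getClientY()));
            handle.addMouseUpHandler(e -> drop(e.getClientX(), e.getClientY()));

            // Touch gestures: identical logic, only the coordinate source differs.
            handle.addTouchStartHandler(e -> {
                e.preventDefault(); // keep the browser from scrolling mid-drag
                Touch t = e.getTouches().get(0);
                start(t.getClientX(), t.getClientY());
            });
            handle.addTouchMoveHandler(e -> {
                e.preventDefault();
                Touch t = e.getTouches().get(0);
                move(t.getClientX(), t.getClientY());
            });
            handle.addTouchEndHandler(e -> {
                Touch t = e.getChangedTouches().get(0); // touchend has no active touches
                drop(t.getClientX(), t.getClientY());
            });
        }

        private void start(int x, int y) {
            proxy = new Label();   // the grey stand-in for the panel
            proxy.setPixelSize(80, 50);
            proxy.getElement().getStyle().setProperty("background", "rgba(128,128,128,0.5)");
            // Coordinates ignore page scrolling for brevity.
            board.add(proxy, x - board.getAbsoluteLeft(), y - board.getAbsoluteTop());
        }

        private void move(int x, int y) {
            if (proxy != null) {
                board.setWidgetPosition(proxy,
                    x - board.getAbsoluteLeft(), y - board.getAbsoluteTop());
            }
        }

        private void drop(int x, int y) {
            if (proxy == null) {
                return;
            }
            board.remove(proxy);
            proxy = null;
            if (hits(dropTarget, x, y)) {    // landed on the droppable pane?
                PopupPanel popup = new PopupPanel(true);
                popup.setWidget(new Label("Dropped!"));
                popup.setPopupPosition(x, y);
                popup.show();
            }
        }

        private boolean hits(Widget w, int x, int y) {
            return x >= w.getAbsoluteLeft() && x <= w.getAbsoluteLeft() + w.getOffsetWidth()
                && y >= w.getAbsoluteTop()  && y <= w.getAbsoluteTop()  + w.getOffsetHeight();
        }
    }

The original panel is never touched; only the proxy is added to and moved inside the AbsolutePanel, which is also what keeps the hit test simple.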

How to access Face Manipulation Mode?

I am fairly new to Blender and I am trying to join objects together for a simulation. I have searched for an answer and found one source which seemed to work best for what I was trying to do; I have been using the answer given on this question. I switched to Object Mode, selected the objects, and pressed Ctrl+J to join them. I am then supposed to enter Edit Mode, and then Face Manipulation Mode. I do not know how to access Face Manipulation Mode, or Vertex Manipulation Mode, and cannot find any online resource showing how to access them. Does someone know what hotkeys I can press or tabs I can open to get to this?
Use the Tab key to switch between Object Mode and Edit Mode.
"Face manipulation" mode is not really a thing; just select a face (RMB while in Edit Mode) and manipulate it like anything else. Make sure face selection is enabled: three little buttons on the horizontal bar below the 3D view let you switch the selection mode between vertices, edges, and/or faces (their icons show the respective element highlighted).

How to develop transparent interface?

This is not completely a programming question. Today when I launched the LiLi USB Creator software, I saw that its interfaces are transparent. Is that because they are Photoshopped, or is transparency a technique I can use from an IDE? I'm using both NetBeans and Eclipse. To be clear, I'm adding a photo too.
For transparency of the entire GUI, including controls (which doesn't seem to be the case in your screenshot), .NET includes the Form.Opacity property.
Additionally, it may be possible to use Layered Windows to change only the opacity of the top-level component to produce the desired effect.
For Java specifically, there's an official Java tutorial for that, although it seems to set the opacity of the entire window, including all child components.
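For reference, a minimal sketch of the approach from that Java tutorial (the translucency API is Java 7+; as noted, the opacity applies to the whole window, child components included):

    import java.awt.GraphicsDevice;
    import java.awt.GraphicsEnvironment;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.SwingUtilities;

    public class TranslucentWindow {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                GraphicsDevice gd = GraphicsEnvironment.getLocalGraphicsEnvironment()
                        .getDefaultScreenDevice();
                // Uniform per-window translucency is platform dependent.
                if (!gd.isWindowTranslucencySupported(
                        GraphicsDevice.WindowTranslucency.TRANSLUCENT)) {
                    System.err.println("Translucency is not supported on this platform.");
                    return;
                }

                JFrame frame = new JFrame("Translucent window");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(new JButton("I fade together with the window"));
                frame.setSize(300, 200);
                frame.setLocationRelativeTo(null);
                frame.setOpacity(0.55f); // 0.0f fully transparent .. 1.0f opaque
                frame.setVisible(true);
            });
        }
    }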