How to access Face Manipulation Mode? - simulation

I am fairly new to Blender, and I am trying to join objects together for a simulation. I have researched for an answer and found one source that seemed to fit what I was trying to do: I have been using the answer given on this question. I switched to Object Mode, selected the objects, and pressed Ctrl+J to join them. I am then supposed to enter Edit Mode, and then Face Manipulation Mode. I do not know how to access Face Manipulation Mode or Vertex Manipulation Mode, and I cannot find any online resource that shows how to access it. Does someone know which hotkeys I can press or tabs I can open to get to this?

Use the Tab key to switch between Object Mode and Edit Mode.
"Face manipulation" mode is not really a thing; just select a face (RMB while in Edit Mode) and manipulate it like anything else. Make sure that face selection is enabled: three little buttons on the header below the 3D view toggle the selection mode between vertices, edges, and faces (their icons show the respective element selected).

Related

Why does my game stutter based on what I have selected in the Hierarchy in Unity

I have no idea why this happens and I was wondering if there was a reason or maybe even a fix.
Vid link to what I'm talking about: https://youtu.be/HwxqL95lzXU
The reason it occurs is that the editor has to update everything affected by whatever you have selected. On top of that, Unity will attempt to draw the gizmos related to the selected object and all of its children. Depending on the object you have selected, the editor redraws and/or the number of gizmos to draw can take up a lot of CPU time, which causes the lag.
The one solution I can think of is to click Gizmos in the top right of the Scene view and uncheck the Selection Outline box. If that does not make a noticeable enough difference, disabling all gizmos can also help.
Outside of these small changes, I am unsure whether there is a way to heavily reduce the editor's CPU usage while objects are selected. I tend never to have objects selected while playing unless I need to track various data. Even then, it is not the entire GameObject I care about, so I will selectively print the data I need or track it with a custom editor tool.
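A minimal sketch of what such a tool might look like: an EditorWindow that watches one Transform's position without the object having to stay selected in the Hierarchy. The window name and menu path are made up, and the script needs to live in an Editor folder:

    using UnityEditor;
    using UnityEngine;

    // Watches one Transform without keeping it selected in the Hierarchy.
    public class TransformWatcher : EditorWindow
    {
        [MenuItem("Tools/Transform Watcher")]
        static void Open() => GetWindow<TransformWatcher>("Transform Watcher");

        Transform target;

        void OnGUI()
        {
            target = (Transform)EditorGUILayout.ObjectField(
                "Target", target, typeof(Transform), true);
            if (target != null)
                EditorGUILayout.LabelField("Position", target.position.ToString());
        }

        // EditorWindow.Update runs regularly; repaint so the label stays fresh.
        void Update() => Repaint();
    }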

How can I set the size and location of the app window within a UI test?

A bit of background: I recently implemented drag-and-drop behavior in my app, where I can drag items from e.g. the Finder into my NSTableView. Now I want to write a few UI tests for this new functionality.
The general idea was to move the Finder window to the left side of the screen and my application window to the right side, and then execute the drag and drop. The drag and drop itself is not the problem; the problem is the setup of the mentioned window layout. I cannot find a convenient way to resize and move the two windows. Coming from .NET, I expected something like app.window.setSize(..) or app.window.moveTo(...).
What I tried so far:

1. As I have Magnet installed on my Mac, I tried the easy way out and sent key events (control + option + arrow) to the window. This did not work; sending the keystrokes results in an error beep. Doing this manually during the tests works, so I don't know what exactly stops Magnet from rearranging the windows, but I guess it has something to do with the testing framework. I did not dig deeper into this, as it would have been a cheap solution anyway.

2. Dragging the app window's corners based on the screen dimensions, e.g. for the window on the left I drag the corners to the top left, bottom left, top middle, and bottom middle of the screen. This requires that all four corners are visible on screen, but that's a problem for another day. The solution would normally work, but the y-coordinates I get from my app window's frame are not what I was expecting. I retrieve the app window's location with app.windows.firstMatch.frame.origin. The x-coordinates look alright, but the y-coordinates are totally off (from what I expected).
I can't find many resources regarding the origin or frame members. Any idea on how to approach this problem, or where to find documentation about the XCUITest framework and the basic concepts behind it? The official documentation doesn't help in this case. I only found this short explanation in the Apple documentation archive about the coordinate system of macOS (or OS X back then) applications.

UI Hololens - HandDraggable Issues

I've recently created a 2D app for the HoloLens. It is a UI panel with several buttons on it. In order to let the user drag the panel and position it as they want, I implemented the HandDraggable.cs functionality (from HoloToolkit). However, whenever I try to move the panel, it also rotates.
To change that, I modified the Rotation Mode from "Default" to "Orient Towards User" and "Orient Towards User and Keep Upright". But then it works even worse: in either case, whenever I select the panel and try to drag it somewhere, it runs out of my field of view and suddenly disappears.
I wanted to ask if somebody has already tried to implement the HandDraggable option in a HoloLens UI app and knows how to fix this rotation issue.
I'm currently working on HoloLens UI for one of my projects, and to manipulate UI I used the TwoHandManipulatable script, which is built into the MixedRealityToolkit. In that script's Manipulation Mode you can set "Move" as the only option, which allows you to move a menu with two hands as well as one. (I wanted a menu that you can also rotate and scale, which works perfectly with this script; you can lock the axes around which rotation is enabled, to avoid unwanted manipulation.)
For your HandDraggable script, did you try setting RotationMode to Lock Object Rotation? It sounds like that could solve the problem.
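If you prefer doing it from code rather than the Inspector, here is a minimal sketch, assuming the HoloToolkit version where RotationMode is a public field with a nested RotationModeEnum:

    using HoloToolkit.Unity.InputModule;
    using UnityEngine;

    // Attach to the same GameObject as HandDraggable.
    public class PanelDragSetup : MonoBehaviour
    {
        void Awake()
        {
            var draggable = GetComponent<HandDraggable>();
            if (draggable != null)
            {
                // Keep the panel's orientation fixed while it is dragged.
                draggable.RotationMode =
                    HandDraggable.RotationModeEnum.LockObjectRotation;
            }
        }
    }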

HoloLens: how to render an element visible only in AR, but not in mixed reality capture

I'm making a presentation in which someone using the HoloLens is duplicated on a big screen. For the duplication it uses the Device Portal's Mixed Reality Capture option (live stream).
I need to render a tooltip that is visible only to the person with the HoloLens, but invisible to the people watching it on the big screen.
From what I've seen, the only rendering difference between the two is that I can render black on the live stream (if I omit rendering the alpha channel) while it stays invisible on the HoloLens due to the way its screen works. This is unfortunately useless to me, as I need to show something to the HoloLens viewer, not to the big-screen viewers.
Any ideas on how I can make part of the content visible only to the HoloLens user?
I can't use Spectator View due to other constraints (I need the first-person view).
Found a solution; not the best one possible, but usable.
I render the tooltip objects only to the right eye, as only the contents of the left eye are included in the live view.
For anyone wondering: in a shader there is a magic value, unity_StereoEyeIndex, which is 0 or 1 depending on the eye. Before you can use it, it needs to be set up with Unity's stereo rendering macros (with single-pass instanced rendering, for example, UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX in the fragment shader).
If anyone has an idea how I can do this without sacrificing stereoscopy, I'll be happy to hear about it.

GTK prevent custom widget from grabbing focus

I've implemented a musical keyboard as a subclass of Fixed, where each individual key is a subclass of DrawingArea, and so far it works great: custom drawing code in expose, press and release functionality working... kind of. See, here's the problem: I want the user to be able to drag the mouse across the keyboard with the button held down to play it. I currently capture the button-press and button-release signals, as well as enter- and leave-notify. Unfortunately, this doesn't quite work, because the widget seems to grab the mouse pointer as soon as the button is pressed over it. This makes sense for normal buttons, but not for a musical keyboard. Is there any good way to remedy this, other than rewriting the entire keyboard to be one massive DrawingArea?
Also, it shouldn't matter, but in case it does I'm using GTK#.
You might consider using GooCanvas: you can represent each of the keys as a CanvasPolyline and fill them with the colors you need. Each canvas item can act on events like enter, leave, button-press, etc.
This method seems to make more sense (to me) than separate DrawingAreas. As each drawn element is still accessible, you can even change colors, sizes, and other properties dynamically. Also, Polyline lets you make more complex shapes.
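As an alternative, if you would rather keep the DrawingArea-per-key approach: what you are seeing is GDK's implicit pointer grab on button press, and you can release it so enter/leave events keep flowing to the sibling keys while the button is held. A rough sketch against GTK# 2; NoteOn is a hypothetical stand-in for your sound-triggering code:

    using Gtk;
    using Gdk;

    class KeyArea : DrawingArea
    {
        public KeyArea()
        {
            // DrawingArea receives no button/crossing events by default.
            AddEvents((int)(EventMask.ButtonPressMask
                          | EventMask.ButtonReleaseMask
                          | EventMask.EnterNotifyMask
                          | EventMask.LeaveNotifyMask));
        }

        protected override bool OnButtonPressEvent(EventButton evnt)
        {
            NoteOn();
            // Release the implicit grab so other keys get crossing events.
            Gdk.Pointer.Ungrab(evnt.Time);
            return true;
        }

        protected override bool OnEnterNotifyEvent(EventCrossing evnt)
        {
            // With the grab released, dragging across keys with button 1
            // held down produces an enter-notify event on each key.
            if ((evnt.State & ModifierType.Button1Mask) != 0)
                NoteOn();
            return true;
        }

        void NoteOn() { /* hypothetical: start this key's note */ }
    }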