Here is the situation: I am trying to use Debug.Log() to print the lengths of the left arm and right arm in each frame (in real time) while developing a Kinect app. I noticed that the numbers inside the red circle in the screenshot keep increasing. What do they mean? Are they frame numbers recorded after I start running the app in Unity?
Those numbers represent how many times the same log message has been repeated. You have the Collapse button enabled in the Console, so identical log entries are collapsed into a single line with a repetition count next to it.
I have programmed a 90-second sequence in Google Earth Studio, and I want to slow down only the middle section.
(I think there could be a long and messy workaround: change the overall time of the video in the project settings with scaling selected, then swap in copies of the original start and finish and link them all back together, but there must be an easier way to do this!?)
If there is a better forum for Google Earth Studio programming questions, please share it in the comments.
I have found a better way to slow down a section of a Google Earth Studio animation, but it's still a bit clunky.
1. Increase the overall length of the animation in the project settings (see above).
2. Highlight all the keyframes after, and including, the end of the section to be stretched (the dots in the keyframe editor, bottom half of the screen).
3. Drag the selected keyframes to the right. This increases the time between the two keyframes that bound the section, automatically slowing it down.
Tip 1: To select multiple keyframes (dots), click and drag with the mouse, but remember to select all keyframes, including those out of sight; scroll down to the lower attributes.
Tip 2: Roll up (collapse) the attributes before selecting multiple keyframes; this usually means you can see everything without needing to scroll down. Click the small arrow next to the attribute (far left of the keyframe editor).
Tip 3: You can always select keyframes manually: hold down Shift when you click on them to add them to the original selection.
Add to the comments if you know an easier way to slow down a section of a Google Earth Studio animation.
A bit of background: I recently implemented drag-and-drop behavior in my app, so that I can drag items from, for example, the Finder into my NSTableView. Now I want to write a few UI tests for this new functionality.
The general idea is to move the Finder window to the left side of the screen and my application window to the right side, and then execute the drag and drop. The drag and drop itself is not the problem; the problem is setting up that window layout. I cannot find a convenient way to resize and move the two windows. Coming from .NET, I expected something like app.window.setSize(..) or app.window.moveTo(...).
What I tried so far:
As I have Magnet installed on my Mac, I tried the easy way out and sent key events (Control + Option + arrow) to the window. This did not work; sending the keystrokes just results in an error beep. Doing the same thing manually while the tests run works, so I don't know exactly what stops Magnet from rearranging the windows, but I guess it has something to do with the testing framework. I did not dig deeper into this, as it would have been a cheap solution anyway.
Dragging the app window's corners based on the screen dimensions; e.g. for the window on the left I drag the corners to the top left, bottom left, top middle, and bottom middle of the screen. This requires all four corners to be visible on screen, but that's a problem for another day. The approach would normally work, but the y-coordinates I get from the frame of my app window are not what I expected. I read the location of the app window with app.windows.firstMatch.frame.origin; the x-coordinates look right, but the y-coordinates are completely off (from what I expected).
I can't find many resources about the origin or frame members. Any idea how to approach this problem, or where to find documentation about the XCUITest framework and the basic concepts behind it? The official documentation doesn't help in this case. I only found this short explanation in the Apple documentation archive about the coordinate system of macOS (or OS X, back then) applications.
My team has made an application in Unity3D with MRTK for the HoloLens 2. Our main menu inside the application does not use a Canvas, but includes Quads to display pictures and Text Mesh Pro 3D text fields. I have found that, while this menu is open, several elements, such as the top-left corner picture and parts of the text fields, are jittery even when you hold your head steady. When you nod your head, the affected parts of the text seem to lag behind, so that they end up lower or higher than the text that remains steady.
The cutoff point between stable and unstable text is always the same. There is a central area that is stable; text that is too high, or too far to the left or right, is unstable. The division falls in the middle of the letters (for example, the topmost part of the capital letter S is unstable, while the lowercase letter m is stable). It does not matter whether the viewport is centered on the middle or the side of the menu. Other objects in the menu that are even further from the center, such as buttons, are still stable.
I'm aware that there can be problems with hologram stability, but I do not understand why only parts of the same text field are affected. I can't include screenshots or videos because the effect doesn't show up in screen captures from the HoloLens.
Does anyone know what could be causing part of an object to be unstable on the HoloLens, and what might be done about it?
Edit: I made an edited screenshot to try and recreate the visual effect seen in the Hololens:
It seems to be related to depth reprojection. Text doesn't write to the depth buffer by default, which can lead to instability. MRTK has some tips on this, including some specifically for TextMeshPro: Depth buffer sharing in Unity.
I have 120 photos like the one below, showing the amount of fluorescent powder deposited onto a surface when it is touched by fingers. The photo is taken under UV light. You can see five fingerprints and the reflection from the light source.
I'd like to know if there is an automated way of estimating the area of the fluorescent fingerprints in batch mode. We have been using ImageJ to manually select a particular print and estimate its area. Is it possible to automatically recognise the fingerprints in ImageJ and measure them for all 5 prints on each of the 120 photos?
Note: you can clearly see that the print on the right is quite well defined, but the one on the left is quite diffuse.
First, the data is useless without a scale, and the photos will be hard to process without a fixed set-up. I'd spend time building a photo set-up that minimizes glare and keeps the scale constant, then approach the problem by using the Threshold tool to find the prints, making selections from the resulting mask, and measuring the area. I'd then create a macro to batch-process the images.
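For what it's worth, here is a minimal sketch of that threshold-and-measure pipeline written in Python with scikit-image rather than as an ImageJ macro; the "photos" folder name, the pixels-per-mm calibration, and the minimum-size cutoff are placeholder values you would have to set for your own set-up.

```python
# Sketch: threshold each photo, keep blob-sized regions, and report their areas.
# PIXELS_PER_MM and MIN_PRINT_AREA_PX are made-up calibration values that must
# be measured for the real set-up (e.g. with a ruler in the frame).
from pathlib import Path

from skimage import color, filters, io, measure, morphology

PIXELS_PER_MM = 10.0        # placeholder: pixels per millimetre in the photos
MIN_PRINT_AREA_PX = 500     # placeholder: ignore specks smaller than this

for path in sorted(Path("photos").glob("*.jpg")):
    image = io.imread(path)
    gray = color.rgb2gray(image)

    # Otsu's threshold separates the bright fluorescent powder from the dark background.
    mask = gray > filters.threshold_otsu(gray)
    mask = morphology.remove_small_objects(mask, min_size=MIN_PRINT_AREA_PX)

    # Label connected regions; the lamp reflection may also pass the threshold,
    # so it may need to be excluded by position or by cropping first.
    labels = measure.label(mask)
    for region in measure.regionprops(labels):
        area_mm2 = region.area / PIXELS_PER_MM ** 2
        print(f"{path.name}: region {region.label} area = {area_mm2:.1f} mm^2")
```

The same threshold-then-measure idea can be recorded as an ImageJ macro for batch processing if you prefer to stay inside ImageJ.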
I am currently working on a Unity project that I started on version 2018.3.6f1. The other day I installed version 2019.3.0a2, and this changed the screen size, so now I have a white background on each side.
Trying to go back to the size I had by using the scale tool does not work, so what do you recommend for changing my screen size correctly, or restoring it to its original value?
Here is a representation of my problem:
https://imgur.com/a/QKl9kVr
This is related to a setting you can change from the top bar of the Game window. It sets the screen-size emulation within the editor to different values. It looks like you want 'Free Aspect' mode, but you can also use it to check behaviour at any other resolution.