I am trying to build a color selection list in my personal project, with 48 PressableButtonHoloLens2 prefabs under a GridObjectCollection. When I run and hover with the simulated fingertip, the editor gives me these warning messages.
Q1: Is this because too many buttons are too close to each other, or simply because the number of buttons with colliders exceeds 64? The message says 'Consider increasing the query buffer size in the pointer profile'.
Q2: Where can I increase the buffer size? I don't see any 'Buffer size' field in the pointer profile.
Q3: Would increasing the buffer size decrease performance?
Warning message
Maximum number of 64 colliders found in PokePointer overlap query.
Consider increasing the query buffer size in the pointer profile.
UnityEngine.Debug:LogWarning(Object)
Microsoft.MixedReality.Toolkit.Input.PokePointer:FindClosestTouchableForLayerMask(LayerMask, BaseNearInteractionTouchable&, Single&, Vector3&) (at Assets/MixedRealityToolkit.SDK/Features/UX/Scripts/Pointers/PokePointer.cs:169)
Microsoft.MixedReality.Toolkit.Input.PokePointer:OnPreSceneQuery() (at Assets/MixedRealityToolkit.SDK/Features/UX/Scripts/Pointers/PokePointer.cs:127)
Microsoft.MixedReality.Toolkit.Input.FocusProvider:UpdatePointer(PointerData) (at Assets/MixedRealityToolkit.Services/InputSystem/FocusProvider.cs:878)
Microsoft.MixedReality.Toolkit.Input.FocusProvider:UpdatePointers() (at Assets/MixedRealityToolkit.Services/InputSystem/FocusProvider.cs:841)
Microsoft.MixedReality.Toolkit.Input.FocusProvider:Update() (at Assets/MixedRealityToolkit.Services/InputSystem/FocusProvider.cs:518)
Microsoft.MixedReality.Toolkit.<>c:b__60_0(IMixedRealityService) (at Assets/MixedRealityToolkit/Services/MixedRealityToolkit.cs:880)
Microsoft.MixedReality.Toolkit.MixedRealityToolkit:ExecuteOnAllServices(IEnumerable`1, Action`1) (at Assets/MixedRealityToolkit/Services/MixedRealityToolkit.cs:969)
Microsoft.MixedReality.Toolkit.MixedRealityToolkit:ExecuteOnAllServicesInOrder(Action`1) (at Assets/MixedRealityToolkit/Services/MixedRealityToolkit.cs:950)
Microsoft.MixedReality.Toolkit.MixedRealityToolkit:UpdateAllServices() (at Assets/MixedRealityToolkit/Services/MixedRealityToolkit.cs:880)
Microsoft.MixedReality.Toolkit.MixedRealityToolkit:Update() (at Assets/MixedRealityToolkit/Services/MixedRealityToolkit.cs:580)
To reproduce
Create an empty game object
Put 48 x PressableButtonHoloLens2 prefabs under it
Assign GridObjectCollection to the parent
Update layout (cell width x height = 0.032)
Run and hover with simulated hand.
Expected behavior
No warning messages
Your Setup (please complete the following information)
Unity Version [e.g. 2018.4.6f1]
MRTK Version [e.g. v2.0.0]
https://github.com/microsoft/MixedRealityToolkit-Unity/issues/6052
Q1: Is this because too many buttons are too close to each other, or simply because the number of buttons with colliders exceeds 64? The message says 'Consider increasing the query buffer size in the pointer profile'.
It is because there are too many buttons close to each other: the overlap query around the fingertip finds more than 64 colliders at once, not because of the total number of buttons in the scene.
Q2: Where can I increase the buffer size? I don't see any 'Buffer size' field in the pointer profile.
You can do this on the PokePointer prefab: in the PokePointer component, look for the "Scene Query Buffer Size" field.
Q3: Would increasing the buffer size decrease performance?
Yes, I anticipate it would, though it's unclear how much relative to other components in the scene. Note that the poke pointer runs its queries every frame, at least once per hand.
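For intuition about why the warning triggers at exactly 64: overlap queries of this kind write results into a fixed-size, preallocated buffer and silently drop anything beyond its capacity. A language-agnostic sketch of that pattern (hypothetical names, not MRTK's actual code):

```python
def overlap_query_non_alloc(colliders, point, radius, buffer):
    """Fill a preallocated buffer with colliders near `point` (1D toy model).

    Returns the hit count. This mirrors the fixed-buffer pattern used by
    non-allocating physics queries: hits beyond len(buffer) are silently
    dropped, which is why MRTK warns when the count reaches the buffer size.
    """
    hits = 0
    for c in colliders:
        if abs(c - point) <= radius:
            if hits == len(buffer):
                print("warning: buffer full, results truncated")
                return hits
            buffer[hits] = c
            hits += 1
    return hits

# 70 colliders packed around the query point, but a buffer of only 64:
buffer = [None] * 64
n = overlap_query_non_alloc(list(range(70)), 0, 100, buffer)
```

With 48 tightly packed buttons, each of which may contribute several colliders, reaching 64 hits in one query is plausible; enlarging the buffer trades a little per-frame copying for complete results.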
Related
I want to make the ground in this part of the terrain a bit deeper (lower), but holding the left or right Shift key with the left mouse button does not change anything on the terrain.
I've been looking for an answer for hours, and I finally found it.
If you are trying to lower a piece of terrain that is already at height 0, you will not be able to.
You can only lower areas that are above height 0. In short, height 0 is the minimum: you cannot go below zero because of the way the terrain is created.
To be able to make valleys, select the "Set Height" tool, enter a height, and raise all the ground to that height, either all at once with "Flatten All" or in small pieces of land by dragging the mouse on the ground.
If you click "Flatten All", all the terrain is brought to that height, losing all the changes previously made for the mountains.
However, if you have already sculpted terrain and don't want to lose it and redo it from scratch, you can use the following solution:
Export the current heightmap;
Open the image in an editor (I'm not an expert in Photoshop or other graphics programs, but I'm sure Photoshop has a way; if there isn't a built-in option, find out how to use scripts in Photoshop);
Find a way to set each pixel according to the formula: (current grayscale value * original height + maximum depth of valleys) / (original height + maximum depth of valleys), with grayscale values normalized to the 0-1 range;
Export the image in Raw format;
Go back to Unity and import it onto the terrain as described in the link above;
In the terrain settings, set the height equal to the "original height" value you used in the formula plus the "maximum depth of valleys" value;
You should now have the same terrain as before, but raised. Set the Y position in the transform equal to minus the "maximum depth of valleys" so that it sits at the same height as before.
You should now be able to create valleys!
I haven't tried the process, but I've thought about it so much I'm pretty sure it will work.
Good work!
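Assuming grayscale values normalized to the 0-1 range, the remapping step in the recipe above can be sketched as follows (this is my reading of the formula, not tested against Unity):

```python
def remap_pixel(p, original_height, max_valley_depth):
    """Remap a normalized heightmap pixel so the whole terrain is raised
    by max_valley_depth inside a new, taller height range.

    Old world height: p * original_height
    New world height: p * original_height + max_valley_depth
    Normalized by the new range (original_height + max_valley_depth).
    """
    return (p * original_height + max_valley_depth) / (original_height + max_valley_depth)

# Example: original terrain height 600 m, valleys up to 100 m deep.
# Old ground level (p = 0) maps to 100/700 of the new 700 m range,
# i.e. world height 100 m; shifting the terrain's Y by -100 puts the
# old ground back at 0, leaving 100 m of room to dig below it.
old_ground = remap_pixel(0.0, 600, 100)
old_peak = remap_pixel(1.0, 600, 100)
```

The key property is that the old maximum (p = 1) still maps to the top of the new range, so no existing detail is clipped.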
How the viewports look after loading the file:
I am having an issue caused by extremely large objects (in terms of physical size, not polycount etc.) that I imported from a game using NinjaRipper (a script used for extracting 3D models from games). When I open the file containing these large objects, the objects are only rendered in the left orthographic viewport. All other viewports/views do not show the geometry, regardless of which rendering mode (wireframe, edges faces, etc.) I have selected on said viewports. The objects are also not visible in perspective views.
When I unhide all items apart from a single object (of normal size), I am able to see the object in all viewports, including perspective viewports. When I unhide all again, the object which could previously be seen disappears. When switching to perspective view while these extremely large objects are present, the ViewCube disappears for an unknown reason. Zooming in or out in a perspective viewport also results in the ViewCube disappearing.
This is the only scene I've had so far which shows these issues, and all my graphics drivers are up to date (specs listed below). The scene contains 3602 objects and has 1,957,286 polygons and 1,508,550 vertices.
This is the furthest I could zoom out in 3ds max:
Viewcube has disappeared on top right and bottom right viewport:
I tried removing all of the extremely large objects by hand, after which the remaining (normal sizes) objects could be seen in 2 of the viewports (top left and top right viewport did render correctly).
Viewports after having deleted all extremely large objects:
I tried resetting the scene, after which I merged the scene containing all 'normal sized objects' into an empty scene. This resulted in all viewports rendering the objects correctly. However, after saving the file and re-opening the saved file, 2 of the 4 viewports did not render the objects as was the case after just having deleted all but the 'normal sized' objects.
My question is: how should I deal with these extremely large imported objects in order to fix the viewport rendering issues they cause?
I wrote a simple bit of MAXScript code to print out the maximum size of the biggest object in the scene, which resulted in a value of 2.6*10^38 [generic units], which, according to my calculation, corresponds to 6.6*10^36 [meters]; in summary: extremely large. (I suspect the NinjaRipper script, or the script which imports the files it produces into 3ds Max, had some sort of error causing some of the vertices to have extremely large position values.) When I switch to the Measure tab in Utilities and press Ctrl+A to select all objects in the scene (the scene containing all objects, including the extremely large ones), 3ds Max crashes due to the large object size (error message: "Application error- An error has occured and the application will now close. No Scene changes have occured since your last save.").
I could write some MAXScript code which deletes all objects larger than a certain size (for example 10^5 [meters]). However, as mentioned above, this for some reason does not fix the issue completely: after saving the scene with only 'normal sized' objects and re-opening it, only 2 of the 4 viewports render the objects correctly. I ran the code for measuring the size of the largest object again after having deleted all extremely large objects, to check that I had indeed not skipped one of them; the result was 121.28 [generic units] (corresponding to object "Mesh_3598"), which is a relatively normal size. Yet 2 of my 4 viewports still do not render the objects even after deleting the large ones (only when the left orthographic view is selected can they be seen in the 2 viewports that fail to render part of the time).
Code for checking largest object (also prints out maximum size of this object):
-- Track the largest bounding-box extent seen so far
global_max = 0
largest_obj = undefined
for obj in geometry do
(
    obj_max_x = obj.max.x - obj.min.x
    obj_max_y = obj.max.y - obj.min.y
    obj_max_z = obj.max.z - obj.min.z
    local_max = amax #(obj_max_x, obj_max_y, obj_max_z)
    -- Both assignments must be inside a parenthesized block: without it,
    -- only the first statement is conditional and largest_obj would be
    -- overwritten on every iteration.
    if local_max > global_max do
    (
        global_max = local_max
        largest_obj = obj
    )
)
messagebox ("global max = " + (global_max as string))
messagebox ("largest obj = " + (largest_obj as string))
See the following links for the 3ds max scene files I have mentioned:
https://drive.google.com/open?id=1bAilmaHAXDr4WuD8gGS4piQfPzzJM9MH
Any suggestions/help will be greatly appreciated. Thank you very much!
System specs:
-Autodesk 3ds Max 2018 x64
-Windows 10 Pro x64
-Intel i5 6600K @ 3.5 GHz
-MSI Z170A Gaming M7 (socket 1151, ATX)
-Cooler Master G750M (750 W)
-MSI Radeon R9 390X Gaming (8 GB)
-Noctua NH-D15
-Kingston HyperX Fury Black 16 GB (4x4 GB PC-21300 DIMM @ 2666 MHz)
As it turns out, the extremely large objects were indeed causing the viewport rendering error. After removing all objects with a maximum size over 100000 [generic units], the viewport rendering errors were gone. I suspect the issue was caused by the objects not fitting between the viewport's near and far planes due to their extreme size.
I have an image taken by my iPod touch 4 that's 720x960. In the Simulator, calling CGImageGetBytesPerRow() on the image returns 2880 (720 * 4 bytes), which is what I expected. However, on the device, CGImageGetBytesPerRow() returns 3840, the "bytes per row" along the height (960 * 4 bytes). Does anyone know why the behavior differs even though the image I'm calling CGImageGetBytesPerRow() on has a width of 720 and a height of 960 in both cases?
Thanks in advance.
Bytes per row can be anything as long as it is sufficient to hold the image bounds, so it's best not to assume it will be the minimum needed to fit the image.
I would guess that on the device, bytes per row is dictated by some or other optimisation or hardware consideration: perhaps an image buffer that does not have to be changed if the orientation is rotated, or the image sensor transfers extra bytes of dead data per row that are then ignored instead of doing a second transfer into a buffer with minimum bytes per row, or some other reason that would only make sense if we knew the inner workings of these devices.
It may be slightly different because of internal memory allocation: "The number of bytes used in memory for each row of the specified bitmap image (or image mask)."
Consider using NSBitmapImageRep for some special tasks.
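Whichever explanation applies, the practical rule is the same: index pixel data using the reported bytes-per-row value, never a recomputed width * 4. A small illustration (plain Python, with the question's numbers):

```python
def pixel_offset(x, y, bytes_per_row, bytes_per_pixel=4):
    """Byte offset of pixel (x, y) in a row-padded RGBA buffer.

    bytes_per_row may exceed width * bytes_per_pixel (alignment padding,
    or whatever the platform decided), so it must come from
    CGImageGetBytesPerRow rather than be derived from the width.
    """
    return y * bytes_per_row + x * bytes_per_pixel

# 720-wide image: tightly packed rows are 2880 bytes, but if the device
# reports 3840 bytes per row, pixel (0, 1) starts at byte 3840, not 2880.
tight = pixel_offset(0, 1, 720 * 4)
padded = pixel_offset(0, 1, 3840)
```

Code that walks the buffer assuming 2880-byte rows would read skewed, garbage pixels on the device while working fine in the Simulator.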
I'm trying to create GtkTreeViewColumn resizing functionality like how it is in Thunderbird. Key word here is "trying". So far I haven't succeeded.
The easiest way to understand how this resizing works is to fire up Thunderbird and play with resizing the columns yourself, but basically the way it works in Thunderbird is if you drag, say, column #1 to the right, this increases the width of column #1, at the same time decreasing the width of column #2 by the same amount. This happens until column #2 reaches its minimum width (for example, 0 pixels). At this point, continuing to drag column #1 to the right still increases the width of column #1, but since column #2 cannot be shrunken any further, then column #3 is shrunken until it reaches its minimum width. This continues until all of the columns to the right of column #1 are at their absolute minimum widths; at this point, column #1 can't have its width increased anymore, so continuing to drag the column to the right does nothing.
While the mouse button is still held down, if you were to start dragging column #1 to the left again (to shrink it), what would happen is what happened above, except in reverse order. As column #1 shrinks, the last column in the tree view grows until reaching its width at the time the original drag (when the mouse was first pressed down to start the drag) first started. At this point, the second to last column grows until reaching its width at the time of the original drag... and so on.
Then of course, when column #1 reaches its minimum width, column #0 shrinks until it reaches its minimum width. Since column #0 is the first column, then continuing to drag column #1 to the left won't shrink it anymore; in fact, nothing will happen.
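The redistribution logic described above is independent of the toolkit. Here is a sketch in Python of the "grow one column, shrink the columns to its right in order" step (hypothetical helper, with per-column minimum widths; the symmetric shrink/restore direction would mirror it):

```python
def grow_column(widths, mins, i, delta):
    """Try to grow column i by delta pixels, taking the space from the
    columns to its right, left to right, without letting any of them go
    below its minimum width. The total width stays constant, so the tree
    view never grows and no horizontal scrollbar appears.
    Returns the number of pixels actually granted."""
    available = sum(widths[j] - mins[j] for j in range(i + 1, len(widths)))
    granted = min(delta, available)
    widths[i] += granted
    remaining = granted
    for j in range(i + 1, len(widths)):
        take = min(remaining, widths[j] - mins[j])
        widths[j] -= take
        remaining -= take
        if remaining == 0:
            break
    return granted

widths = [100, 50, 50]
mins = [20, 30, 30]
# Ask for 30 px: column 1 can give 20, column 2 gives the remaining 10.
granted = grow_column(widths, mins, 0, 30)
```

Capping `granted` by the total slack to the right is what makes the drag "stop" once every right-hand column is at its minimum.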
One of the major benefits to handling dragging like this is: the columns will never be resized "out of bounds" and cause the GtkTreeView to grow in width or, if the GtkTreeView containing the GtkTreeViewColumns is packed into a scrolled window, cause horizontal scrollbars to appear. Having these scrollbars appear, or the tree view grow in width (and thus increase the width of the window) is super annoying for the user and makes things look a lot less clean. Which is, I assume, why Thunderbird handles it this way, as do other applications.
So, anyway, my problem is that I just can't figure out how to do this in GTK+. I'm wondering if it's even possible? If it is... how would it be done? I'm clueless here.
As far as I know, the only signal you can connect to in order to know whether a GtkTreeViewColumn has been resized is the notify::width signal. The problem is that you can't return TRUE or FALSE from the signal handler to tell GTK+ not to allow the resize to go through; it's just a notification signal. That prevents me from, for example, detecting that all columns to the right of the one being dragged have reached their minimum widths and then telling GTK+ to stop increasing that column's width.
Another problem: calling gtk_tree_view_column_set_fixed_width() from within the notify::width handler creates an infinite loop, which I don't know how to prevent either. (What I am doing, by the way, is calling gtk_tree_view_column_set_resizable(column, TRUE) and then gtk_tree_view_column_set_sizing(column, GTK_TREE_VIEW_COLUMN_FIXED) when creating the columns.)
Again, any help would be much appreciated.
The infinite loop can be broken by temporarily blocking signals, like this:
g_signal_handler_block(...);
/* update width */
g_signal_handler_unblock(...);
I got the same problem and my approach was to use 2 functions for the resize signal:
The first one checked whether the handler was connected; if not, it connected the handler and saved the handler ID.
The second one just resized all columns proportionally and then disconnected the signal using the saved handler ID.
I'm creating an image editing application for iPhone. I would like to enable the user to pick an image from the photo library, edit it (grayscale, sepia, etc.) and, if possible, save it back to the file system. I've done the image picking (the simplest part, as you know, using the image picker) and also the grayscale conversion. But I got stuck with sepia; I don't know how to implement it. Is it possible to get the value of each pixel of the image so that we can vary it to get the desired effects? Or are there any other possible methods? Please help.
The Apple image picker code will most likely be holding just the file names and some lower-res renderings of the images in RAM until the last moment, when a user selects an image.
When you ask for the full frame buffer of the image, the CPU suddenly has to do a lot more work decoding the image at full resolution, but it might even be as simple as this to trigger it:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
/* IN MAIN APPLICATION FLOW - but see EDIT 2 below */
const char* pixels = [[((NSData*)CopyImagePixels([myImage CGImage]))
autorelease] bytes]; /* N.B. returned pixel buffer would be read-only */
This is just a guess as to how it works, really, but based on some experience with image processing in other contexts. To work out whether what I suggest makes sense and is good from a memory usage point of view, run Instruments.
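The original question also asked about sepia specifically. Once you have per-pixel access, sepia is just a per-pixel linear transform; one commonly used weighting (an assumption on my part, there are many sepia formulas) looks like this:

```python
def sepia(r, g, b):
    """Luminance-weighted sepia transform for one RGB pixel
    (0-255 integer channels), with each output clamped to 255."""
    out_r = min(255, int(0.393 * r + 0.769 * g + 0.189 * b))
    out_g = min(255, int(0.349 * r + 0.686 * g + 0.168 * b))
    out_b = min(255, int(0.272 * r + 0.534 * g + 0.131 * b))
    return out_r, out_g, out_b

# Black stays black; white becomes a warm cream tone because the blue
# weights sum to less than 1 while red and green clamp at 255.
```

Applied to every pixel of the buffer obtained above (and written into a mutable copy, per EDIT 2), this produces the classic brownish tone.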
The Apple docs say (related, may apply to you):
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
[ http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html ]
AND
Note: Prior to iPhone OS 3.0, UIView instances may have a maximum height and width of 1024 x 1024. In iPhone OS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume. Therefore, it is in your best interests to keep view sizes as small as possible. Regardless of which version of iPhone OS is running, you should consider using a CATiledLayer object if you need to create views larger than 1024 x 1024 in size.
[ http://developer.apple.com/iPhone/library/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html ]
Also worth noting:
(a) Official how-to
http://developer.apple.com/iphone/library/qa/qa2007/qa1509.html
(b) From http://cameras.about.com/od/cameraphonespdas/fr/apple-iphone.htm
"The image size uploaded to your computer is at 1600x1200, but if you email the photo directly from the iPhone, the size will be reduced to 640x480."
(c) Encoding large images with JPEG image compression requires large amounts of RAM, depending on the size, possibly larger amounts than are available to the application.
(d) It may be possible to use an alternate compression algorithm with (if necessary) its malloc rewired to use temporary memory mapped files. But consider the data privacy/security issues.
(e) From iPhone SDK: After a certain number of characters entered, the animation just won't load
"I thought it might be a layer size issue, as the iPhone has a 1024 x 1024 texture size limit (after which you need to use a CATiledLayer to back your UIView), but I was able to lay out text wider than 1024 pixels and still have this work."
Sometimes the 1024 pixel limit may appear to be a bit soft, but I would always suggest you program defensively and stay within the 1024 pixel limit if you can.
EDIT 1
Added extra line break in code.
EDIT 2
Oops! The code gets a read-only copy of the data (there is a difference between CFMutableDataRef and CFDataRef). Because of limitations on available RAM, you then have to make a lower-res copy by smooth-scaling it down yourself, or copy it into a modifiable buffer. If the image is large, you may need to write it in bands to a temporary file, release the unmodifiable data block, and load the data back from the file. And only do this, of course, if having the data in a temporary file like this is acceptable. Painful.
EDIT 3
Here's perhaps a better idea: try using a destination bitmap context backed by a memory-mapped CFData block. Does that work? Again, only do this if you're happy with the data going via a temporary file.
EDIT 4
Oh no, it appears that memory-mapped read-write CFData is not available. Maybe try the mmap BSD APIs.
EDIT 5
Added "const char*" and "pixels read-only" comment to code.