I am porting a web app to a mobile app using PhoneGap. The app has a window control that can be resized much like a window in Windows, i.e. by clicking an edge and dragging.
My initial thought was simply to translate touch events into mouse events, but when doing this I noticed that the edges, i.e. the touchable areas, need to be rather big for a finger to hit them reliably.
How big should an area be for it to be comfortably touchable? My fallback is to implement the resize with a pinch gesture or something similar.
48dp.
http://developer.android.com/design/style/metrics-grids.html
Why 48dp?
On average, 48dp translates to a physical size of about 9mm (with some variability). This is comfortably within the recommended range of target sizes (7-10mm) for touchscreen objects, and users will be able to target them reliably and accurately with their fingers.
If you design your elements to be at least 48dp high and wide, you can guarantee that:
- your targets will never be smaller than the minimum recommended target size of 7mm, regardless of the screen they are displayed on;
- you strike a good compromise between overall information density on the one hand and targetability of UI elements on the other.
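Applying that to the original question: one rough sketch, assuming the resize handle is a DOM element styled to be at least about 48px wide, is to forward touch events to the existing mouse-based drag logic. The element and function names below are illustrative, not from any particular library:

```js
// Minimal sketch: forward touches on a resize handle to the existing
// mouse-based drag code. `handle` is a placeholder for your edge element,
// which should be sized at least ~48px so a finger can hit it reliably.
function forwardTouchAsMouse(handle) {
  const map = { touchstart: 'mousedown', touchmove: 'mousemove', touchend: 'mouseup' };
  for (const touchType of Object.keys(map)) {
    handle.addEventListener(touchType, (event) => {
      const touch = event.changedTouches[0];
      handle.dispatchEvent(new MouseEvent(map[touchType], {
        bubbles: true,
        clientX: touch.clientX,
        clientY: touch.clientY,
      }));
      event.preventDefault(); // suppress the browser's own synthetic mouse events
    }, { passive: false });
  }
}
```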
In my Unity defense game, each enemy has its own HP bar, rendered using a UI slider on a world-space canvas that is billboarded towards the camera.
My concern is that there may be too many canvas elements in the scene. Is there an alternative to this approach, or is my solution efficient enough performance-wise?
Assuming you're not going to have hundreds of enemies on the screen concurrently, you should be fine in terms of performance. You could always introduce an ObjectPool for the on-screen UI components if you see performance dropping.
I would recommend setting the scale of the Canvas RectTransform to something like 0.01 so it occupies less space on the screen, then re-adjusting the size of your existing UI elements to match this scale.
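As a rough sketch of the pooling idea, using Unity's built-in UnityEngine.Pool.ObjectPool (the healthBarPrefab field and class name are placeholders, not part of your project):

```csharp
using UnityEngine;
using UnityEngine.Pool;

// Hypothetical pooling sketch: reuse health-bar instances instead of
// instantiating and destroying one per enemy spawn.
public class HealthBarPool : MonoBehaviour
{
    public GameObject healthBarPrefab;   // placeholder: your HP bar prefab
    private ObjectPool<GameObject> pool;

    void Awake()
    {
        pool = new ObjectPool<GameObject>(
            createFunc: () => Instantiate(healthBarPrefab, transform),
            actionOnGet: bar => bar.SetActive(true),
            actionOnRelease: bar => bar.SetActive(false));
    }

    public GameObject Spawn() => pool.Get();
    public void Despawn(GameObject bar) => pool.Release(bar);
}
```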
One Canvas
If you're looking for an alternative way to create world-space UI health bars in Unity, you could use a single canvas and dynamically position the UI elements within it based on the positions of the game objects they are attached to.
One way to achieve this is to convert each game object's world position to a screen point (e.g. with Camera.WorldToScreenPoint) and then to a local position within the canvas using the ScreenPointToLocalPointInRectangle method of RectTransformUtility. You can then assign that point to the UI element's rectTransform.anchoredPosition.
However, there are also drawbacks to using a single canvas for UI elements attached to many different game objects: moving any element marks the whole canvas as "dirty", forcing Unity to rebuild the canvas geometry that frame.
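A minimal sketch of that approach (field names such as healthBar and worldCamera are assumptions; this assumes a Screen Space - Overlay canvas, hence the null camera argument):

```csharp
using UnityEngine;

// Sketch: one shared overlay canvas; each health bar is repositioned to
// follow its enemy every frame.
public class HealthBarFollower : MonoBehaviour
{
    public RectTransform healthBar;   // UI element living on the shared canvas
    public RectTransform canvasRect;  // RectTransform of the shared canvas
    public Transform target;          // the enemy this bar tracks
    public Camera worldCamera;        // camera rendering the enemy

    void LateUpdate()
    {
        // World position -> screen point -> local point inside the canvas.
        Vector2 screenPoint = worldCamera.WorldToScreenPoint(target.position);
        if (RectTransformUtility.ScreenPointToLocalPointInRectangle(
                canvasRect, screenPoint, null, out Vector2 localPoint))
        {
            healthBar.anchoredPosition = localPoint;
        }
    }
}
```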
Whenever I make 2D games, I always attach the sprites under a canvas and set the canvas scaling mode to "Scale With Screen Size", so that the sprites scale with the user's aspect ratio. Is this bad practice, and is there a better way of doing this?
Unity culls objects that are off-screen, so there shouldn't be any problems with rendering performance at runtime.
From personal experience, making a game responsive for different resolutions is complicated and many techniques can be used to get good results.
I also use the "Scale with screen size" setting, and over the years I haven't found anything that works better.
One general performance tip: if you have many elements, perhaps animated, that the camera never sees, disable them from a script. Graphically they shouldn't cause problems, but the engine still updates them frame by frame, so if they aren't essential it is better to disable them.
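A minimal sketch of that tip, using Unity's renderer visibility callbacks (the componentsToPause field is an assumption; note these callbacks also fire for the editor's Scene view camera):

```csharp
using UnityEngine;

// Sketch: pause selected components (e.g. an Animator or custom scripts)
// while no camera can see this object's renderer.
public class PauseWhenOffscreen : MonoBehaviour
{
    public Behaviour[] componentsToPause; // placeholder: whatever should pause

    void OnBecameInvisible() { SetPaused(true); }
    void OnBecameVisible()   { SetPaused(false); }

    void SetPaused(bool paused)
    {
        foreach (var component in componentsToPause)
            component.enabled = !paused;
    }
}
```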
I'm adding an image, but it won't go fullscreen on the iPhone XR; there is white space around it.
I tried zooming in and out to refresh it, but it still won't go fullscreen.
I expect the image to fill the screen.
It sounds like you've aligned your image to the Safe Area guide rather than the view edge.
Safe Area guides are inset a bit on iPhone X/XR devices to allow space for the home indicator or the sensor housing (notch). If you want your view to fill the superview, make sure your layout constraints are attached to the superview edge rather than the Safe Area edge.
To fix this, edit each existing constraint in the storyboard and change its item from Safe Area to Superview.
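If you prefer to do the same thing in code, here is a minimal sketch (the class and the imageView property are illustrative) that pins the image view to the superview's edges instead of the safe area:

```swift
import UIKit

class FullScreenImageViewController: UIViewController {
    let imageView = UIImageView() // placeholder: your image view

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(imageView)
        // Pin to the superview's edges, not the safe area, so the image
        // extends under the notch and the home indicator.
        imageView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            imageView.topAnchor.constraint(equalTo: view.topAnchor),
            imageView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            imageView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            imageView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
        ])
    }
}
```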
Trying to figure out how to incorporate this effect into my app.
It is essentially the adjustment of the colors of the speech bubbles as the user scrolls, as seen in the Messages app in iOS 7.
The bubbles close to the top of the screen are light blue, the bubbles towards the bottom are darker.
Create a gradient on your views. Set the start and end colors' intensity according to the view's position in the scroll view's superview (obtained with convertRect:toView:). As your scroll view scrolls, update the visible bubbles' background colors according to their current positions. As an optimization, only update the views that are actually visible; using a table or collection view can help you with that. It's a simple yet effective effect.
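A rough Swift sketch of the idea, assuming the bubbles are cells of a UITableView whose delegate is this view controller (the class name and hue values are illustrative):

```swift
import UIKit

// Hypothetical view controller owning the table of chat bubbles.
class ChatViewController: UIViewController, UITableViewDelegate {
    @IBOutlet var tableView: UITableView!

    func scrollViewDidScroll(_ scrollView: UIScrollView) {
        // Only visible cells are touched, which keeps the work cheap.
        for cell in tableView.visibleCells {
            // Cell frame converted to the table view's superview coordinates.
            let frame = tableView.convert(cell.frame, to: tableView.superview)
            let fraction = max(0, min(1, frame.midY / tableView.bounds.height))
            // Light blue near the top, darker towards the bottom.
            cell.contentView.backgroundColor = UIColor(
                hue: 0.58, saturation: 0.6,
                brightness: 1.0 - 0.4 * fraction, alpha: 1.0)
        }
    }
}
```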
I'm implementing some code in MATLAB to track the left-ventricle wall position in echocardiography images using a contour-based method. Unfortunately, in some frames the contour evolves more than expected, and in some regions the wall does not have good contrast.
Does anyone know a way to restrict the contour from evolving unexpectedly from frame to frame, preserving both the old frame's position and the new frame's shape?
Thank you all for helping.
Image segmentation is a hard problem. There is no general approach that works well in every situation. How are your contours being defined? Are you doing threshold-based segmentation, or using another approach? Have you tried transforming into a polar coordinate system in the centre of the LV? Have you tried quantifying some sort of 'least-squares' cost associated with moving the contour?
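To make that last suggestion concrete, here is a tiny MATLAB sketch of one way to penalise frame-to-frame movement; the variable names and the simple blend are illustrative assumptions, not a published method:

```matlab
% Sketch: constrain the contour by blending it towards the previous frame.
% r_prev/r_new hold the wall radius at N angles in polar coordinates
% around the LV centre; lambda trades off old position vs. new shape.
r_prev = [30 31 32 31 30 29];   % contour radii in the previous frame
r_new  = [30 35 40 31 30 29];   % raw contour found in the current frame
lambda = 0.5;                   % 0 = trust the new frame, 1 = freeze contour
r_constrained = (1 - lambda) * r_new + lambda * r_prev;
```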
All I can suggest is to look at how people solve similar problems. In my field (MRI), the best we have a) isn't really all that good, and b) is probably the open-source MATLAB program for cardiac segmentation called Segment (see http://medviso.com/products/segment/features/cmr/). I suggest you look at how they do it and see whether you can adapt the method to work with the (much noisier, much harder to interpret) echo images.
Sorry I can't be more helpful!