Could use of a UISlider for discrete values be a misuse and a reason for rejection from the App Store? (iPhone)

I'm using a UISlider for selecting different variants (fragrances, to give you an idea) from a group of 12. The slider is about half a screen wide (on iPhone). Compared to choosing a continuous value where precision is not so important, I imagine users may have difficulty choosing a specific variant. I chose the slider because it is more uniform and also because a picker would take too much space. Could this be a reason for rejection of an app from the App Store? (I haven't submitted yet.)

Almost certainly okay. Is the selection discrete or continuous? A set of discrete choices along a continuum could be a great application of a slider, with a couple of caveats:
1) If the choices are really discrete, snap the slider position to the nearest choice after a drag - like paging in a scroll view (see the sketch below).
2) The blue fill on the left side represents an increasing quantity. Does your model have one? e.g. let's say there are four beverage sizes on the menu, and the discrete positions represent the sizes the customer can order, smallest to largest. The blue bar then tells you how much beverage is going to be in the cup (even cooler if you rotate it 90 degrees to fill up towards y == 0). But how about selecting a season (spring, summer, fall, winter)? That's certainly a discrete choice on a continuum, but what does the blue fill mean? Days of the year? Not really.
In the seasonal selection instance, I'd be tempted to write my own slider, just like Apple's but with no blue fill. Then again, once you've decided to custom build, you can be less influenced by the standard control.
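To illustrate caveat 1, here's a minimal sketch of snapping a stock UISlider to the nearest of 12 discrete positions (the class and selector names are placeholders, not anything from the question's code):

```swift
import UIKit

final class VariantPickerViewController: UIViewController {
    // Hypothetical setup: 12 discrete variants mapped onto a continuous slider.
    private let variantCount = 12
    private let slider = UISlider(frame: CGRect(x: 20, y: 120, width: 280, height: 44))

    override func viewDidLoad() {
        super.viewDidLoad()
        slider.minimumValue = 0
        slider.maximumValue = Float(variantCount - 1)
        // Snap when the drag ends, so the thumb always rests on a discrete choice.
        slider.addTarget(self, action: #selector(snapToNearestVariant),
                         for: [.touchUpInside, .touchUpOutside])
        view.addSubview(slider)
    }

    @objc private func snapToNearestVariant() {
        let nearest = roundf(slider.value)
        slider.setValue(nearest, animated: true)   // animate, like scroll-view paging
        let selectedIndex = Int(nearest)            // index into the 12 variants
        print("Selected variant \(selectedIndex)")
    }
}
```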
Here's my anecdotal do and don't list for Apple approval (mostly don't): don't crash, don't call private APIs, don't do demo + up-sell, don't infringe copyright, don't interfere with Apple's business objectives - like selling add-on content outside of the store - and do something cool and simple.
But minor slider abuse isn't on my list. Good luck.

SwiftUI tap on image to select an area - better way than this?

I'm updating an app I did for a car club that connects their customers (dealerships, parties, firehouse events, town events, TV commercials, magazine ads, etc.) with their members to rent out fancy/classic/muscle cars for photo ops and eye candy at events. The car owner gets paid, and the car club takes a small percentage for club costs and events. It handles CRM stuff, scheduling, photos, etc.
The new feature they want is a way to quickly look over a car before and after an event, tap an area on the screen and describe any damages (plus other functions). They want to be able to look up stuff over time and do comparisons, etc., perhaps generate repair invoices, etc.
I have come up with a basic formula that works: an image of the car is displayed, with a transparent mask image above it in the z-order, colored differently per region. The user taps, I look up the mask color at the tap location, draw a circle on the image, use that color as an index into a part/region list, record all the info, and Bob's your uncle.
This just shortcuts having a bunch of drop-downs or selectors to manually pick a part or region from a list, and gives it some visual sugar.
It works nicely and is consistently reliable (the images are PNG - colors get munged too much by JPEG compression). It all falls apart, though, if they decide they want to change images; they want me to retroactively draw circles on the new images based on old records' information. My firm line so far has been "no, you can't do that", because the tap locations are tied to the original images. They're insistent on trying, so...
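For reference, here's a minimal sketch of the color-at-tap lookup described above. The part table and the assumption that the tap has already been converted into the mask image's pixel coordinates are mine, not from the original code:

```swift
import UIKit

// Hypothetical table: mask green value (0-255) -> part name.
let partsByGreenValue: [Int: String] = [
    10: "Front left fender", 20: "Rocker panel",
    30: "Left rear wheel",   40: "Windshield"
]

/// Samples one pixel of the mask image (PNG, so the colors survive intact)
/// and returns its RGBA components. `point` is a top-left-origin pixel coordinate.
func maskColor(in image: UIImage, at point: CGPoint) -> (r: Int, g: Int, b: Int, a: Int)? {
    guard let cgImage = image.cgImage else { return nil }
    var pixel = [UInt8](repeating: 0, count: 4)
    let drawn = pixel.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress, width: 1, height: 1,
                                      bitsPerComponent: 8, bytesPerRow: 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        // Core Graphics uses a bottom-left origin, so shift the image so the
        // requested pixel lands on this context's single pixel.
        context.translateBy(x: -point.x, y: point.y - CGFloat(cgImage.height - 1))
        context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                         width: CGFloat(cgImage.width),
                                         height: CGFloat(cgImage.height)))
        return true
    }
    guard drawn else { return nil }
    return (Int(pixel[0]), Int(pixel[1]), Int(pixel[2]), Int(pixel[3]))
}

// Usage: map the tap to a part name via the mask's green channel.
// let part = maskColor(in: maskImage, at: tapInImagePixels).flatMap { partsByGreenValue[$0.g] }
```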
I have two questions.
The first is simple - am I missing something painfully obvious as a better way to do this (select a known value for a section of a graphic)?
The second one is about scaling - loading stock images into the asset catalog, displaying them from 1x images, finding the scale value and adjusting tap locations, etc. all works great. At 2x and 3x, the scaling gets wonky. Loading from storage is the bigger issue... it seems that when I load a pair of image files from storage, turn each into a Data object, then shove that into a UIImage for display in a SwiftUI Image view, I lose the easy scaling I get when I embed images in the asset catalog under the 1x slot. Is there a way to do the file -> Data -> UIImage -> Image(uiImage:) path and force a 1x rendering, skipping any auto-rendering/scaling that iOS might do?
Thoughts?
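On the second question, for what it's worth: UIImage has an initializer that takes an explicit scale, so one thing worth trying (a sketch, not a verified fix for the behavior described) is forcing scale 1 when building the image from the loaded Data:

```swift
import SwiftUI

// Sketch: load a file from storage and force a 1x UIImage so SwiftUI's Image
// doesn't apply a 2x/3x interpretation. `fileURL` is a placeholder.
func loadOneXImage(from fileURL: URL) -> Image? {
    guard let data = try? Data(contentsOf: fileURL),
          let uiImage = UIImage(data: data, scale: 1.0) else { return nil }
    return Image(uiImage: uiImage)   // uiImage.scale is now 1 regardless of device
}
```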
Below are quickly masked sample images I'm using to display the car and the mask; each green area is slightly different in the green value of its RGB color, and I just use that as the lookup key for the part name in the description ("Front left fender", "Rocker panel", "Left rear wheel", "Windshield", and so forth).

Form field usability issues

At work we have a small external consultancy that doesn't appear to have much UX/usability experience. For example, their primary approach to responsive design so far has been to have a mobile breakpoint for font sizes (usually expressed in px, to boot) for heading tags, and nothing else. Even text scaling is a foreign concept to them.
We are going to release a new forms system, and they've submitted mockups of what they envision for the form look and feel. Besides the obviously faulty approaches of using placeholder text as labels, floating the label above the form when a user clicks in it, etc., their least poor mockup has each field with the label floated to appear above and inside the field boundary.
With this approach, padding is used to slide the actually enterable portion of the field down. The field boundary in this particular case is a non-gray color, with rounded corners as well. The net visual impression is of a bounded region with no visible field inside it, and a label along the inside top.
For dropdowns there is, however, at least a visual cue that a field is there: the down arrow. Still, where a user would expect to see field boundaries, there are none.
I'm a little concerned about this and not sure how to raise my concerns. A/B testing this before a full release isn't currently possible, or I'd go there. Politically, my boss's boss loves these consultants, so it'd be dicey to simply express concerns without something to back them up.
I see a lot of studies and blog posts about rounded vs. square corners, with studies showing that rounded corners can be more inviting and square corners draw more attention. But here, my concern over rounded corners is that, without any other visual cue that "here is a field", the rounded corners and the label inside at the top directly communicate, "This is NOT a field but an empty region". Is there research or other support for this?

Measuring distance with iPhone camera

How can I implement a way to measure distances in real time (via the video camera?) on the iPhone, like this app that uses a card of known size as a reference to work out the actual distance?
Are there any other ways to measure distances? Or how would I go about doing this using the card method? What framework should I use?
Well, you do have something for reference, hence the use of the card. Having said that, after watching the video for the app, I can't say it seems too user friendly.
So you either need a reference object of some known size, or you need to deduce the size from the image. One idea I just had that might help is to use the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some stuff).
Here's what I think.
When the user wants to measure something, they take a picture of it, but you're actually taking two separate images, one with the flash on and one with the flash off. Then you can analyze the lighting differences between the images and the flash reflection to determine the scale of the image. This will only work for close and not-too-shiny objects, I guess.
But that's about the only other way I could think of to deduce scale from an image without any fixed objects.
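A very rough sketch of how the flash-on/flash-off idea could be modeled, assuming the extra illumination from the flash falls off roughly with the square of distance. The calibration constant and the brightness averaging are placeholders, and real-world reflectivity differences would complicate this considerably:

```swift
import Foundation

// Hypothetical model: brightness added by the flash ≈ k / d² for a surface of
// fixed reflectivity, where k comes from calibrating against a known distance.
func estimatedDistance(flashOnBrightness: Double,
                       flashOffBrightness: Double,
                       calibrationConstant k: Double) -> Double? {
    let delta = flashOnBrightness - flashOffBrightness   // extra light from the flash
    guard delta > 0 else { return nil }                    // flash made no measurable difference
    return sqrt(k / delta)                                 // invert delta = k / d²
}

// Calibration: photograph a matte surface once at a known distance, then
// k = measuredDelta * knownDistance * knownDistance.
```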
I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work!
An alternative approach would be to use the auto-focus feature of the camera. Point-and-shoot cameras often have a laser range finder that they use to auto-focus. The iPhone doesn't have this, and its f-stop is fixed. However, users can change the focus by tapping the camera screen. The phone can also switch between regular and macro focus.
If the API exposes the current focus settings, maybe there's a way to use this to determine range?
Another solution may be to use two laser pointers.
Basically, you would shine two laser pointers at, say, a wall, in parallel. The further back you go, the closer together the dots will look in the video, even though they remain the same physical distance apart. Then you can easily come up with a formula to measure the distance based on how far apart the dots are in the photo.
See this thread for more details: Possible to measure distance with an iPhone and laser pointer?.
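For what it's worth, the formula behind the laser-pointer idea is the usual pinhole-camera relationship: with the beams a fixed real-world distance apart, their apparent separation in pixels shrinks in proportion to distance. A sketch, where the focal length in pixels is a per-device calibration value (an assumption, not something read from an API here):

```swift
/// Pinhole model: pixelSeparation ≈ focalLengthPixels * realSeparation / distance,
/// so distance ≈ focalLengthPixels * realSeparation / pixelSeparation.
func distanceFromLaserDots(realSeparationMeters: Double,
                           pixelSeparation: Double,
                           focalLengthPixels: Double) -> Double? {
    guard pixelSeparation > 0 else { return nil }
    return focalLengthPixels * realSeparationMeters / pixelSeparation
}

// Example: dots 10 cm apart that appear 50 px apart, with a calibrated focal
// length of 1000 px, put the wall roughly 2 meters away.
```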

Measuring a room with an iPhone

I have a need to measure a room (if possible) from within an iPhone application, and I'm looking for some ideas on how I can achieve this. Extreme accuracy is not important, but accuracy down to say 1 foot would be good. Some ideas I've had so far are:
Walk around the room and measure using GPS. Unlikely to be anywhere near accurate enough, particularly for iPod touch users
Emit sounds from the microphone and measure how long they take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Anyone have any other ideas?
You could stand in one corner and throw the phone against the far corner. The phone could begin measurement at a certain point of acceleration and end measurement at deceleration.
1) Set iPhone down on the floor starting at one wall with base against the wall.
2) Mark line where iPhone ends at top.
3) Pick iPhone up and move base to where the line is you just drew.
4) Repeat steps 1->3 until you reach the other wall.
5) Multiply number of lines it took to reach other wall by length of iPhone to reach final measurement.
=)
I remember seeing programs for realtors that involved holding a reference object up in a picture. The program would identify the reference object and other flat surfaces in the image and calculate dimensions from that. It was intended for measuring the exterior of houses. It could follow connected walls that it could assume were at right angles.
Instead of shipping with a reference object, as those programs did, you might be able to use a few common household objects like a piece of printer paper. Let the user pick from a list of common objects what flat item they are holding up to the wall.
Detecting the edges of walls, and of the reference object, is some tricky pattern recognition, followed by some tricky math to convert the found edges to planes. Still better than throwing your phone at the far wall, though.
Emit sounds from the microphone and measure how long they take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Au contraire, mon frère.
This is the most user friendly, not to mention accurate, way of measuring the dimensions of a room.
PocketMeter measures the distance to one wall with an accuracy of half an inch.
If you use the same formulas to measure distance, but have the person stand near a corner of the room (so that the distances to the walls, floor, and ceiling are all different), you should be able to calculate all three measurements (length, width, and height) with one sonar pulse.
Edited, because of the comment, to add:
In an ideal world, you would get 6 pulses, one from each of the surfaces. However, we don't live in an ideal world. Here are some things you'll have to take into account:
The sound pulse causes the iPhone to vibrate. The iPhone microphone picks up this vibration.
The type of floor (carpet, wood, tile) will affect the time that the sound travels to the floor and back to the device.
The sound reflects off more than one surface (wall) and returns to the iPhone.
If I had to guess, because I've done something similar in the past, you're going to have to emit a multi-frequency tone, made up of a low frequency, a medium frequency, and a high frequency. You'll have to perform a fast Fourier Transform on the sound wave you receive to pick out the frequencies that you transmitted.
Now, I don't want to discourage you. The calculations can be done. However, it's going to take some work. After all, PocketMeter has been at it for a while, and they only measure the distance to one wall.
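Whatever the tone design ends up being, the underlying sonar arithmetic is simple: the sound travels to the surface and back, so the distance is the speed of sound times half the round-trip delay. A minimal sketch, assuming the delay has already been found (e.g. by cross-correlating the recording with the emitted pulse, which is the genuinely hard part described above):

```swift
/// Converts an echo delay (in audio samples) into a one-way distance in meters.
/// Assumes roughly 343 m/s for the speed of sound at room temperature.
func distanceForEchoDelay(samples: Int,
                          sampleRate: Double = 44_100,
                          speedOfSound: Double = 343) -> Double {
    let roundTripSeconds = Double(samples) / sampleRate
    return speedOfSound * roundTripSeconds / 2   // halve it: out and back
}

// Example: an echo arriving 1286 samples after the pulse at 44.1 kHz
// works out to about 343 * (1286 / 44100) / 2 ≈ 5 meters.
```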
I think an easier way to do this would be to use the Pythagorean theorem. Most rooms are 8 or 10 feet tall, and if the user can guess the height accurately, you can use the camera to do some analysis and crunch the numbers. (You might have to have some clever way to detect the angle.)
How to do it
I expect 5 points off of your bottom line for this ;)
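A sketch of the geometry that answer seems to be pointing at, assuming the user supplies the ceiling height and the app can read the device's tilt when the camera is sighted on the wall/ceiling junction (where the angle comes from, e.g. Core Motion, is left out here):

```swift
import Foundation

/// Right-triangle estimate: the camera sits `rise` below the ceiling, tilted up by
/// `elevationRadians` to sight the wall/ceiling junction; the horizontal distance
/// to the wall is the adjacent side of that triangle.
func horizontalDistanceToWall(ceilingHeight: Double,     // e.g. an 8 ft room ≈ 2.44 m
                              cameraHeight: Double,       // how high the phone is held
                              elevationRadians: Double) -> Double? {
    let rise = ceilingHeight - cameraHeight                // opposite side of the triangle
    guard rise > 0, elevationRadians > 0 else { return nil }
    return rise / tan(elevationRadians)                    // adjacent = opposite / tan(angle)
}

// The Pythagorean theorem then gives the line-of-sight distance:
// sqrt(rise * rise + horizontalDistance * horizontalDistance).
```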
Let me see if this helps. Take an object of known length, place it beside the wall, and with the iPhone take a picture of the wall along with the object. Now get the ratio of wall width to object width from the image on the iPhone. And since you know the width of the object, you can easily calculate the width of the wall. Repeat it for each wall and you will have a room measurement.
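As arithmetic, that's a straight ratio, assuming the reference object lies flat in the same plane as the wall being measured:

```swift
// The wall and the reference object are in the same plane, so their pixel
// widths scale identically and the ratio carries over to real-world units.
func wallWidth(objectRealWidth: Double,
               objectPixelWidth: Double,
               wallPixelWidth: Double) -> Double? {
    guard objectPixelWidth > 0 else { return nil }
    return objectRealWidth * (wallPixelWidth / objectPixelWidth)
}

// Example: an A4 sheet (0.297 m) spanning 200 px, with the wall spanning
// 2400 px in the same photo, gives a wall about 3.56 m wide.
```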
Your users could measure a known distance by pacing it off, and thereby calibrate the length of their pace. Then they could enter the distance of each wall in paces, and the phone would convert it to feet. This would probably be very convenient, and would probably be accurate to within 10%.
If they may need more accurate readings, then give them the option of entering in a measurement from a tape measure.
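A sketch of the pace-calibration arithmetic, with placeholder names:

```swift
// Calibrate once against a known distance, then convert paced walls to feet.
struct PaceCalibration {
    let feetPerPace: Double

    init(knownDistanceFeet: Double, pacesWalked: Int) {
        feetPerPace = knownDistanceFeet / Double(pacesWalked)
    }

    func feet(forPaces paces: Double) -> Double {
        paces * feetPerPace
    }
}

// Example: 20 paces over a measured 50 ft hallway gives 2.5 ft per pace,
// so a wall paced at 6 paces comes out to about 15 ft.
```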
This answer is somewhat similar to Jitendra's answer, but the method he suggests will only work where you can fit the whole wall in a single shot.
Get an object of known size and photograph it held against the wall, with the iPhone held against the other wall (two people or Blu-Tack needed). Then you can calculate the distance between the walls by looking at the size of the object (in pixels) in the photo. You could use a PDF to make a printed document the object of known size, and use a 2D barcode to get the iPhone to pick it up.
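This is the same pinhole-camera relationship as in the laser-pointer answer to the previous question, with the object's known size standing in for the beam separation. Roughly, assuming a calibrated focal length in pixels:

```swift
// Distance from the camera (against one wall) to the object (against the other):
// apparentPixels ≈ focalLengthPixels * realSize / distance.
func wallToWallDistance(objectRealSize: Double,
                        objectPixelSize: Double,
                        focalLengthPixels: Double) -> Double? {
    guard objectPixelSize > 0 else { return nil }
    return focalLengthPixels * objectRealSize / objectPixelSize
}
```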

Things to consider when writing for touch screen?

I'm starting a new project which involves developing an interface for a machine that measures wedge and roundness of lenses and stores the information in a database and reports on it. There's a decent chance we're going to be putting a touch screen on this machine so that it doesn't need to have a mouse or keyboard...
I don't have any experience developing for full size touch screens, so I'm looking for advice/tips/info from you guys...
I can imagine you want to make the elements a little larger than normal... space buttons out a bit more.... things like that... anyone have anything else to add?
A few things to consider:
You need to account for parallax error when touching controls. Basically, the user may touch the screen above or below your actual control and therefore miss the control. This is a combination of the size of the control (eg you can have the active area larger than visual control to allow the user to miss and still activate the control), the viewing angle of the user (which you may or may not be able to predict/control) and the type of touch screen you're using. If you know where the user will be placed relative to the screen when using it, you can usually accommodate this with appropriate calibration.
Depending on the type of touch screen, you may need to ensure that your users aren't wearing gloves or using an implement other than their fingers (eg the end of a pen) to touch the screen. Some screens (eg those depending on conductance) don't respond well to anything other than flesh and blood.
Avoid using double clicks because it can be very hard for users to reliably double click a control. This can be partly mitigated if you've got experienced/trained users working in a fairly controlled environment where they're used to the screens.
Linked to the above, if you are using double clicks, you may find the double click activated when the user only wants to single click. This is because it's very easy for the user's finger to bounce slightly on touching the screen and, depending on how sensitive the double click settings are, trigger a double rather than a single click. For this and the previous reason, we always disable double clicks and only use single clicks (or similar single activation controls).
However big you think you need to make the controls to allow for touch activation, they almost certainly need to be bigger still. Make sure you test the interface with real users in the real deployment environment (or as close to it as you can get). For example, we deployed some screens with nice big buttons you couldn't miss only to find that the control room was unheated and that the users were wearing thick gloves in the middle of winter, making their fingers way bigger than we had allowed for.
Don't put any controls near the edges of the screen - it's very hard to get your finger into the edges (particularly if the screen has a deep bezel) and a slight calibration problem can easily shift the control too close to the edge to use. Standard menus and scroll bars are a good example of controls that can be very tricky to use on a touch screen and you should either avoid them (which is preferable - they're not good for touch screens) or replicate them with jumbo equivalents.
Remember that the user's hand will be over the screen, obscuring some of the screen and controls (typically those below where the user is touching, but it depends on the position of the user relative to the screen). Don't put instructions or indicators where the user's hand or arm will obscure them when trying to use the control they relate to (eg typically put them above rather than below the control).
Depending on the environment, make sure your touch screen is suitably proofed against dust, damp, grease etc and make sure it's easy to clean without damaging it. You wouldn't believe the slime that can quickly accumulate on a touch screen in an industrial or public setting.
The other obvious one is that there's no equivalent of pointer 'hover'. Not that that affects many apps though.
If you decide to put in analog controls (scrollbars, rotation widgets, etc) be sure to put in a digital control also. Some companies think that a touch screen means perfect control over something with your fingers. In real life, this translates to minutes of frustration trying to fix a number that's just a little off.
The most obvious thing is that everything on the GUI needs to be big enough for a fingertip to hit, which is sometimes bigger than you think.
As has been mentioned, there's really no way for a right-click action to happen. Also, double-clicking can be tricky with a fingertip on a touch screen.
The other major thing is that you'll want to create an on-screen keyboard that pops up for text entry, and an on-screen numpad for number-only fields.
I wrote my own set of controls for a POS application designed specifically to be touchscreen friendly.
Remember to allow enough real estate for stubby fingers and talons. In our application, users can have manicures that force them to use the pad of their finger instead of the tip. This means you need to allow more space for activation areas than you would normally consider in any other type of application.
I would also recommend that you accommodate yourself as a programmer, both from a testing standpoint and because things change: there may need to be a keyboard/mouse attached, or the app may run on a non-touch workstation. I cannot tell you how many times I went to touch my flat-panel LCD expecting something to happen, before remembering that I had to use the mouse.
Make sure to read up on basic UI principles like Fitts's law (the time to acquire a target is a function of the distance to and the size of the target); a sketch of the formula follows below.
Also consider whether the device is stationary or handheld when it is in use (e.g., like a PalmPilot or iPhone); research shows that you must accommodate that in your design.
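In case it helps to have the formula to hand, the commonly cited (Shannon) form of Fitts's law is sketched below; the a and b constants are empirical and would have to come from your own measurements:

```swift
import Foundation

/// Fitts's law, Shannon formulation: movement time grows with the
/// "index of difficulty" log2(distance / width + 1). `a` and `b` are
/// empirically fitted constants for a given device and user population.
func fittsMovementTime(distance: Double, targetWidth: Double,
                       a: Double, b: Double) -> Double {
    a + b * log2(distance / targetWidth + 1)
}

// Wider touch targets lower the index of difficulty, which is why bigger
// controls are faster (and more reliable) to hit on a touch screen.
```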
Larger GUI elements are the major thing. But it applies to all elements: scroll bars, tabs, and even text fields.
The other major thing I can think of: it's hard for the user to right-click. So things that require a right click should be avoided; context menus are the only example that comes to mind at the moment.
The other responses are pretty good, but are you totally sure that a touch screen would actually be easier to use? There are a lot of devices where a touch screen actually makes them much harder to use, not easier. The main problem is that you can't use the device when you're not looking at it. If users are going to be doing a lot of repetitive actions, a keyboard could be a lot more efficient.
Also, a touch screen might be a lot harder to use by someone with a disability, if you think there's even a small chance that could happen.
Even though this is quite old now, I found it to still be useful, as a starting point for design considerations.
http://www.sapdesignguild.org/resources/tsdesigngl/index.htm
If you've not already done so, have a look at some of the documentation available for developers on mobile platforms, eg Windows Mobile, iPhone.