Updating an app I built for a car club that connects their customers (dealerships, parties, firehouse events, town events, TV commercials, magazine ads, etc.) with their members, who rent out fancy/classic/muscle cars for photo ops and eye candy at events. The car owner gets paid, and the car club takes a small percentage for club costs and events. It handles CRM stuff, scheduling, photos, etc.
The new feature they want is a way to quickly look over a car before and after an event, tap an area on the screen, and describe any damage (plus other functions). They also want to be able to look things up over time, do comparisons, perhaps generate repair invoices, and so on.
I have come up with a basic formula that works: an image of the car is displayed, with a transparent mask image above it in the z-order, its regions filled with different colors. The user taps, I look up the mask color at the tap location, draw a circle on the image, use that color as an index into a part/region list, record all the info, and Bob's your uncle.
This just shortcuts having a bunch of drop-downs or selectors or whatever to manually pick a part or region from a list, and gives it some visual sugar.
It works, works nicely, and is consistently reliable (the images are PNG; colors get munched up too much by JPEG compression). It all falls apart if they decide they want to change images: they want me to retroactively draw circles on the new images based on old records' information. My firm line so far has been "no, you can't do that", because the tap locations are tied to the original images. They're insistent on trying, so...
I have two questions.
The first is simple: am I missing a painfully obvious better way to do this (selecting a known value for a tapped section of a graphic)?
The second: loading stock images into the asset catalog, displaying them from the 1x slot, finding the scale value, adjusting tap locations, etc., all works great. At 2x and 3x, the scaling gets wonky. Loading from storage is the bigger issue. When I load a pair of image files from storage, turn each into a Data object, then shove that into a UIImage for display in a SwiftUI Image view, I lose the easy scaling I get when the images are embedded in the asset catalog under the 1x slot. Is there a way to go file -> Data -> UIImage -> Image(uiImage:) and force 1x rendering, skipping any automatic rendering/scaling iOS might do?
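A minimal sketch of the loading path in question, with UIImage(data:scale:) pinning the scale to 1 explicitly instead of taking whatever default iOS picks; whether that actually removes the 2x/3x weirdness is exactly what I'm unsure about. The function name and URL handling are placeholders:

```swift
import SwiftUI
import UIKit

// Sketch of the file -> Data -> UIImage -> Image path, pinning the scale to 1
// via UIImage(data:scale:) rather than relying on the default.
// loadUnscaledImage and the URL handling are placeholders for illustration.
func loadUnscaledImage(from url: URL) -> Image? {
    guard let data = try? Data(contentsOf: url),
          let uiImage = UIImage(data: data, scale: 1.0) else {
        return nil
    }
    return Image(uiImage: uiImage)
}
```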
Thoughts?
Below are the quick sample images I'm using for the car and its mask; each green area is slightly different in the RGB G value, and I just use that value as the lookup key for the part name in the description ("Front left fender", "Rocker panel", "Left rear wheel", "Windshield", and so forth).
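For illustration, the mask lookup boils down to something like the sketch below. It assumes RGBA mask PNGs and a tap point already converted into the mask image's pixel coordinates (top-left origin); the green-value-to-part table is made up:

```swift
import UIKit

// Hypothetical mapping from the mask's green channel value to a part name.
let partsByGreenValue: [UInt8: String] = [
    40: "Front left fender",
    80: "Rocker panel",
    120: "Left rear wheel",
    160: "Windshield"
]

// Reads the mask pixel under `point` (in the mask image's pixel coordinates,
// top-left origin) and returns the matching part name, if any.
func partName(at point: CGPoint, in maskImage: UIImage) -> String? {
    guard let cgImage = maskImage.cgImage else { return nil }

    // Render just the tapped pixel into a 1x1 RGBA buffer so the byte layout
    // is predictable no matter how the PNG was encoded.
    var pixel = [UInt8](repeating: 0, count: 4)
    let sampled = pixel.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: 1, height: 1,
                                      bitsPerComponent: 8, bytesPerRow: 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }

        // Core Graphics uses a bottom-left origin, so flip y, then shift the
        // image so the tapped pixel lands on the 1x1 context.
        let flippedY = CGFloat(cgImage.height) - point.y
        context.translateBy(x: -point.x, y: -flippedY)
        context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                         width: cgImage.width,
                                         height: cgImage.height))
        return true
    }

    guard sampled else { return nil }
    return partsByGreenValue[pixel[1]]   // RGBA: index 1 is the green channel
}
```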
Related
For research purposes, I would like to create a Unity VR 3D application that (more or less) simulates a person's foveal field of view. In particular, I would like to render the whole environment across the full field of view, but render certain objects of interest only in the foveal area.
For the purpose of explaining the problem, I created a simple 2D picture. Please assume it's 3D. In the picture, the green area is the peripheral field of view, and the yellow area is the foveal field of view. The whole environment, like walls, sky, etc., should get rendered in both the green and the yellow areas. Particular objects of interest, here the flowers, however, should get rendered only in the yellow area and, importantly, should get cut off when they reach the green area. With this approach, I want to force people to move their heads instead of just moving their eyes.
Any idea how to achieve this? Is it possible to use a kind of mask or filter? Or do I need a stencil shader? I looked around but could not find the correct approach.
I have developed a scratch card effect. I am stuck on the logic: how can I know when the object behind the scratch card image has become visible, so that I can show the reward screen?
PS: with modifications to the code in this link, I was able to get the scratch card effect working in uGUI.
There are many ways you could go about this. Assuming you know the dimensions of the red "target image" that the user is trying to uncover, you could take a fixed number of samples from the area that the target is under. Once, say, 80% of those samples are transparent (i.e. the target is visible at those positions), you can consider the object visible and show the reward screen.
You can use GetPixel to get the individual samples from the scratch texture.
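The Unity side of this would be C# (GetPixel lives on Texture2D), but the sampling-and-threshold logic itself is simple; here is the shape of it as a generic sketch, with a plain alpha lookup standing in for the GetPixel calls:

```swift
// Generic sketch of "sample a grid over the target, count transparent pixels".
// `alphaAt` stands in for reading the scratch texture's alpha at (x, y),
// i.e. Texture2D.GetPixel(x, y).a in Unity.
func isTargetRevealed(targetRect: (x: Int, y: Int, width: Int, height: Int),
                      gridSize: Int = 10,
                      revealThreshold: Double = 0.8,
                      alphaAt: (Int, Int) -> Double) -> Bool {
    var transparentSamples = 0
    let totalSamples = gridSize * gridSize

    for row in 0..<gridSize {
        for col in 0..<gridSize {
            // Spread the samples evenly across the target's area.
            let x = targetRect.x + col * targetRect.width / gridSize
            let y = targetRect.y + row * targetRect.height / gridSize
            if alphaAt(x, y) < 0.1 {   // scratched away, i.e. (near) transparent
                transparentSamples += 1
            }
        }
    }
    return Double(transparentSamples) / Double(totalSamples) >= revealThreshold
}
```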
I need to segment an image on iOS for a fashion app: keep only the foreground and remove the rest of the background, similar to the background-removal tools in various photo editors. Please help me.
General background subtraction is an unsolved problem, so getting perfect results is going to be a big effort. With that said, you can probably get close. Here are a few suggested avenues:
I am guessing that your app will place clothes on a human, or something of the sort. Instead of getting a perfect segmentation, run a person detector, remove all of the image except for the detected person, and fit a part-based human model to the remaining image. Then you have the pose of the person, and can do your image processing accordingly.
Allow the user to input some strokes from the foreground and some strokes from the background, and run a graph-cuts-based image segmentation algorithm on the frame.
Begin your process by having the user not be present in your video stream. From this, learn the background distribution (start with a simple histogram of background pixels; there are much more elaborate schemes, but you need a starting place). Then, when the user enters the scene, create a binary image containing the connected components that don't fit the learned background distribution. This will not be perfect, but you will start to see something close to a binary image where the white pixels are your user and the black pixels are the background. Use morphology operators to join any large connected components that are slightly separated, and threshold your image to remove small noise from things like specular objects and illumination changes.
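A toy sketch of the core of that third avenue follows: a per-pixel running-average background model plus a difference threshold, operating on plain row-major grayscale buffers. The morphology and connected-component steps are left out, and the buffer layout is an assumption:

```swift
// Toy background model for grayscale frames stored as row-major [UInt8] buffers.
// Call update(with:) on frames with no user present, then foregroundMask(for:)
// once the user enters the scene. Morphology / connected components not shown.
struct BackgroundModel {
    let width: Int
    let height: Int
    private var mean: [Double]
    private var frameCount = 0

    init(width: Int, height: Int) {
        self.width = width
        self.height = height
        self.mean = [Double](repeating: 0, count: width * height)
    }

    // Learn the background as a running per-pixel mean.
    mutating func update(with frame: [UInt8]) {
        frameCount += 1
        for i in 0..<mean.count {
            mean[i] += (Double(frame[i]) - mean[i]) / Double(frameCount)
        }
    }

    // Returns 255 where a pixel deviates from the learned background, 0 elsewhere.
    func foregroundMask(for frame: [UInt8], threshold: Double = 30) -> [UInt8] {
        var mask = [UInt8](repeating: 0, count: mean.count)
        for i in 0..<mean.count where abs(Double(frame[i]) - mean[i]) > threshold {
            mask[i] = 255
        }
        return mask
    }
}
```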
Like I said (and as mentioned in the comments), this is not an easy problem, but you can come up with a good approximation if you put some time into it. I suggest the third method I listed. It is achievable and can be broken down into small parts, so you can tell when you're making progress.
Good luck!
I am working on an app in which I want functionality similar to that of the WebMD body image.
How can I identify which part of the image was touched, in an optimal way? Do I have to slice the image up according to my requirements?
How can I add tags to the image, similar to the Facebook photo-upload functionality on the iPhone?
You need some way to figure out what the user touched, or tried to touch.
You might use a list of annotation-like objects, where each object has a location. When the user touches the image, you'll need to find the annotation in the list that's closest to the touch location and react appropriately. The "optimal" way to do that is probably to use a quad tree. For an iPhone app, though, the number of touchable points is probably pretty small (several dozen?), and a brute force search through the list will probably be more than fast enough.
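As a sketch of the brute-force version (the annotation type and the hit radius here are made up):

```swift
import UIKit

// Hypothetical annotation: a location on the image plus whatever else you track.
struct Annotation {
    let location: CGPoint
    let title: String
}

// Brute-force nearest-annotation lookup; fine for a few dozen points.
// Returns nil if nothing is within `maxDistance` of the touch.
func annotation(nearest touch: CGPoint,
                in annotations: [Annotation],
                maxDistance: CGFloat = 44) -> Annotation? {
    let nearest = annotations.min { a, b in
        touch.distanceSquared(to: a.location) < touch.distanceSquared(to: b.location)
    }
    guard let candidate = nearest,
          touch.distanceSquared(to: candidate.location) <= maxDistance * maxDistance
    else { return nil }
    return candidate
}

private extension CGPoint {
    func distanceSquared(to other: CGPoint) -> CGFloat {
        let dx = x - other.x
        let dy = y - other.y
        return dx * dx + dy * dy
    }
}
```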
Another option would be to overlay a transparent view on top of your image for each region that you want the user to be able to touch. Doing this would also make it simple to draw a "tag" at each of those locations.
Is there a way to convert an image on the fly to "Red on Black" for accessibility? I have pictures that I want to stream to the iPhone. For viewing them at night, red on black is easier on the eyes.
Answer:
You're much better off making your own night friendly images, and swapping those out along with text color, etc.
I'm not sure how you have your current images implemented, but before they load you could check a BOOL isNightTime, and if it returns TRUE, load the nighttime images instead. I would suggest taking your current image set and duplicating it with the prefix nt_.
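As a sketch, assuming the nt_ naming convention and an isNightTime flag you maintain elsewhere:

```swift
import UIKit

// Pick the "nt_" variant of an image after dark; otherwise use the normal one.
// The base name and the isNightTime flag are placeholders.
func themedImage(named baseName: String, isNightTime: Bool) -> UIImage? {
    let name = isNightTime ? "nt_\(baseName)" : baseName
    return UIImage(named: name)
}

// Usage: imageView.image = themedImage(named: "stationMap", isNightTime: true)
```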
Bonus:
You can take this a step further. Grab the GPS location, then use the location to get weather information from Wunderground. Part of their report includes the times of sunrise and sunset. You could then check those values against the current time (be careful that all the time zones are playing nice) and, based on the result, enable the nighttime image set.
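Once you have sunrise and sunset parsed into dates in the right time zone, the check itself is just a comparison; how you pull the values out of the weather report is up to you:

```swift
import Foundation

// Given today's sunrise and sunset (already parsed into Dates in the local
// time zone), decide whether the nighttime image set should be active.
func isNightTime(now: Date = Date(), sunrise: Date, sunset: Date) -> Bool {
    return now < sunrise || now > sunset
}
```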
If you do implement this, make sure that the user can still enable or disable it to his/her preference.
I had originally said NOAA, but I can't find where that information is on their website. I know it's there somewhere. Why are .gov sites so ugly? Anyways, I changed it to mention Wunderground instead; just scroll down to the Astronomy section. They have a pretty well-done iPhone website as well, worth checking out.
Bonus 2:
I'm unsure what your maps/images look like, but instead of having to edit them all to red on black, you could edit them to white on black and put a layer on top of that which would allow the user to pick any color/intensity (a rough sketch of this follows below). Instead of using a layer, you could likely also implement it programmatically, but I think a colorizing layer would be much faster/easier.
An alternate method of doing this is to instead make your map transparent/black, and put a layer underneath that which could change colors to the user's liking. You could implement this on a finer scale (place rects of color behind objects/text/whatever else) to allow for full color customization.
Both use transparency to some extent, but I believe that the alternate method requires less overall work.
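Here's a rough sketch of the first variant (white-on-black art with a user-picked color multiplied over it, so white line work takes on the color and black stays black), done as an offscreen render rather than a literal layer:

```swift
import UIKit

// Multiply a chosen color over a white-on-black image. A stand-in for the
// "colorizing layer" idea; the function name is just for illustration.
func colorized(_ image: UIImage, with color: UIColor) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        image.draw(at: .zero)
        color.setFill()
        context.fill(CGRect(origin: .zero, size: image.size), blendMode: .multiply)
    }
}

// Usage: imageView.image = colorized(baseMap, with: .red)
```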
Bonus 3:
If you're already going through the effort to grab the GPS coordinates, it wouldn't be too much additional work to have it also check with another server, which would point out other users using the application locally on the map. Make sure this is disabled by default, as lots of users are uncomfortable with broadcasting their location to the world.
Science:
It's also worth mentioning that green is a horrible color to use if you're looking for night friendliness. Red is the color you want to be using. Red light doesn't bleach the photopigment in your rods the way other colors do, so you keep your night vision (what you get once your eyes adjust). This is the reason the insides of military vehicles usually have red interior lights, and also why every movie you've ever seen with anything tactical uses lots of red lighting.
Red light is also used to preserve night vision in low-light or night-time situations, as the rod cells in the human eye aren't sensitive to red.
-Wikipedia
I learned this when I went up to Kitt Peak National Observatory this Thanksgiving on a family trip to Arizona. They hand out little keychains with red lights on them, so you can see where you're going in the dark. It was probably one of the coolest things I've ever participated in. I learned so much. If you're in the Tucson area, or have another observatory local to you, I strongly suggest checking them out.
The keychain they gave me broke and it fell off somewhere, it's nowhere to be found :( It was my only souvenir. If anybody from KPNO happens to see this and wants to mail me another one, my email address is in my profile.
Also, here's a link that goes into far more detail than needed, but I know you're all going to google it anyways.
I did find another solution:
http://sourceforge.net/projects/photoshopframew/
Source code is available, and I can run the tiles through Photoshop as part of a chain of events for night viewing.